Search Results: "rra"

26 November 2023

Andrew Cater: MiniDebConf Cambridge - 26th November 2023 - Afternoon sessions

That's all folks ...
Sadly, nothing too much to report. I delivered a very quick three-slide lightning talk on Accessibility, WCAG [Web Content Accessibility Guidelines] version 2.2 and a request for Debian to do better.

WCAG 2.2: WCAG 2.2 Abstract
Debian-accessibility mailing list link: debian-accessibility
I watched the other lightning talks but then left at 1500 - missing three good talks - to drive home at least partly in daylight.

A great four days - the chance to put some names to faces and to recharge in Debian spaces.

Thanks to all involved and especially ...
Thanks to Cambridge Debian folk for helping arrange evening meals, lifts and so on, and especially to those who also happen to be ARM employees and who badged us in and out through the four days.

Thanks to those who staffed Front Desk on both days and, especially, also to the ARM security guards who let us into the site at 0745 on all four days, and to Mark who did the weekend shift inside the building on Saturday and Sunday.

Thanks to ARM for excellent facilities, food, coffee, hosting us - and more coffee; to Codethink for sponsoring - and a lecture from Sudip and some interesting hardware - and to Pexip for sponsorship (and employee attendance).

Here's to the next opportunity, whenever that may be.

25 November 2023

Andrew Cater: Laptop with ARM, mobile phone BoF - MiniDebConf Cambridge day 1

So following Emanuele's talk on a Lenovo X13s, we're now at the Debian on Mobile BoF (Birds of a Feather) discussion session from Arnaud Ferraris.
Discussion and questions on how best to support many variants of mobile phones: the short answer seems to be "it's still *hard*" - too many devices around to add individual tweaks for every phone and manufacturer.

One thing that may not have been audible in the video soundtrack: lots of laughter in the room, prompted as someone's device said, audibly, "You are not allowed to do that without unlocking your device."

Upstream and downstream packages for hardware enablement are also hard: basic support is sometimes easy, but that might even include non-support for charging, for example.

Much discussion around the numbers of kernels and the kernel image proliferation there could be. Debian tends to prefer *one* way of doing things with kernels.

Abstracting hardware is the hardest thing but leads to huge kernels - there's no easy trade-off. Simple/feasible in multiple end-user devices/supportable - pick one.

20 November 2023

Arturo Borrero González: New job at Spryker

Last month, in October 2023, I started a new job as a Senior Site Reliability Engineer at the Germany-headquartered technology company Spryker. They are primarily focused on e-commerce and infrastructure businesses. I joined this company with excitement and curiosity, as I would be working with a new technology stack: AWS and Terraform. I had not been directly exposed to them in the past, but I was aware of the vast industry impact of both. Suddenly, things like PXE boot, NIC firmware, and Linux kernel upgrades are mostly AWS problems. This is a full-time, 100% remote job position. I'm writing this post one month into the new position, and all I can say is that the onboarding was smooth, and I found some good people in my team. Prior to joining Spryker, I was a Senior SRE at the Wikimedia Cloud Services team at the Wikimedia Foundation. I had been there since 2017, for 6 years. It was a privilege working there.

Russ Allbery: Review: The Exiled Fleet

Review: The Exiled Fleet, by J.S. Dewes
Series: Divide #2
Publisher: Tor
Copyright: 2021
ISBN: 1-250-23635-5
Format: Kindle
Pages: 421
The Exiled Fleet is far-future interstellar military SF. It is a direct sequel to The Last Watch. You don't want to start here. The Last Watch took a while to get going, but it ended with some fascinating world-building and a suitably enormous threat. I was hoping Dewes would carry that momentum into the second book. I was disappointed; instead, The Exiled Fleet starts with interpersonal angst and wallowing and takes an annoyingly long time to build up narrative tension again. The world-building of the first book looked outward, towards aliens and strange technology and stranger physics, while setting up contributing problems on the home front. The Exiled Fleet pivots inwards, both in terms of world-building and in terms of character introspection. Neither of those worked as well for me. There's nothing wrong with the revelations here about human power structures and the politics that the Sentinels have been missing at the edge of space, but it also felt like a classic human autocracy without much new to offer in either wee thinky bits or plot structure. We knew most of the shape from the start of the first book: Cavalon's grandfather is evil, human society is run as an oligarchy, and everything is trending authoritarian. Once the action started, I was entertained but not gripped the way that I was when reading The Last Watch. Dewes makes a brief attempt to tap into the morally complex question of the military serving as a brake on tyranny, but then does very little with it. Instead, everything is excessively personal, turning the political into less of a confrontation of ideologies or ethics and more a story of family abuse and rebellion. There is even more psychodrama in this book than there was in the previous book. I found it exhausting. Rake is barely functional after the events of the previous book and pushing herself way too hard at the start of this one. Cavalon regresses considerably and starts falling apart again. There's a lot of moping, a lot of angst, and a lot of characters berating themselves and occasionally each other. It was annoying enough that I took a couple of weeks' break from this book in the middle before I could work up the enthusiasm to finish it. Some of this is personal preference. My favorite type of story is competence porn: details about something esoteric and satisfyingly complex, a challenge to overcome, and a main character who deploys their expertise to overcome that challenge in a way that shows they generally have their shit together. I can enjoy other types of stories, but that's the story I'll keep reaching for. Other people prefer stories about fuck-ups and walking disasters, people who barely pull together enough to survive the plot (or sometimes not even that). There's nothing wrong with that, and neither approach is right or wrong, but my tolerance for that story is usually a lot lower. I think Dewes is heading towards the type of story in which dysfunctional characters compensate for each other's flaws in order to keep each other going, and intellectually I can see the appeal. But it's not my thing, and when the main characters are falling apart and the supporting characters project considerably more competence, I wish the story had different protagonists. It didn't help that this is in theory military SF, but Dewes does not seem to want to deploy any of the support framework of the military to address any of her characters' problems.
This book is a lot of Rake and Cavalon dragging each other through emotional turmoil while coming to terms with Cavalon's family. I liked their dynamic in the first book when it felt more like Rake showing leadership skills. Here, it turns into something closer to found family in ways that seemed wildly inconsistent with the military structure, and while I'm normally not one to defend hierarchical discipline, I felt like Rake threw out the only structure she had to handle the thousands of other people under her command and started winging it based on personal friendship. If this were a small commercial crew, sure, fine, but Rake has a personal command responsibility that she obsessively angsts about and yet keeps abandoning. I realize this is probably another way to complain that I wanted competence porn and got barely-functional fuck-ups. The best parts of this series are the strange technologies and the aliens, and they are again the best part of this book. There was a truly great moment involving Viator technology that I found utterly delightful, and there was an intriguing setup for future books that caught my attention. Unfortunately, there were also a lot of deus ex machina solutions to problems, both from convenient undisclosed character backstories and from alien tech. I felt like the characters had to work satisfyingly hard for their victories in the first book; here, I felt like Dewes kept having issues with her characters being at point A and her plot at point B and pulling some rabbit out of the hat to make the plot work. This unfortunately undermined the cool factor of the world-building by making its plot device aspects a bit too obvious. This series also turns out not to be a duology (I have no idea why I thought it would be). By the end of The Exiled Fleet, none of the major political or world-building problems have been resolved. At best, the characters are in a more stable space to start being proactive. I'm cautiously optimistic that could mean the series would turn into the type of story I was hoping for, but I'm worried that Dewes is interested in writing a different type of character story than I am interested in reading. Hopefully there will be some clues in the synopsis of the (as yet unannounced) third book. I thought The Last Watch had some first-novel problems but was worth reading. I am much more reluctant to recommend The Exiled Fleet, or the series as a whole given that it is incomplete. Unless you like dysfunctional characters, proceed with caution. Rating: 5 out of 10

16 November 2023

Dimitri John Ledkov: Ubuntu 23.10 significantly reduces the installed kernel footprint


Photo by Pixabay
Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically, the installation was optimized for metered download size only. However, kernel size growth and usage no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement lots of optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8 GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that do not install either linux-firmware or modules extra, the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)


Hopefully, the compromise on download size, relative to the disk space and initrd savings, is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the best saving is likely to receive air-gapped updates or to skip updates.

This was achieved by precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with a matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze - whilst leveraging the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.
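To make the split-archive idea concrete, here is a minimal, hedged sketch of how such an initrd could be assembled by hand. It is not the actual Ubuntu tooling; the EARLY and MAIN directories are placeholders for the pre-compressed payload (modules and firmware already ending in .zst) and the userspace portion, and it relies on the kernel accepting concatenated cpio segments and detecting compression per segment:
# Hedged sketch, not the Ubuntu initramfs-tools implementation itself.
# $EARLY holds already-zstd-compressed modules/firmware; $MAIN holds the userspace parts.
(cd "$EARLY" && find . | cpio -o -H newc) > /tmp/early.cpio                    # stored uncompressed
(cd "$MAIN" && find . | cpio -o -H newc | zstd -19 -T0) > /tmp/main.cpio.zst   # userspace, compressed
cat /tmp/early.cpio /tmp/main.cpio.zst > /tmp/initrd.img
# In-kernel module decompression needs matching kernel config and kmod support, e.g.:
grep -E 'CONFIG_MODULE_COMPRESS_ZSTD|CONFIG_MODULE_DECOMPRESS' "/boot/config-$(uname -r)"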

The discovered bugs in the kernel module loading code likely affect systems that use the LoadPin LSM with in-kernel module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers will pick up the kernel fixes from the stable trees. Or, you know, just use Ubuntu kernels, as they do get fixes and features like these first.

The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself, Dimitri John Ledkov, ensuring the optimal solution was implemented, that everything landed on time, and even implementing portions of the final solution.

Hi, it's me. I am a Staff Engineer at Canonical, and we are hiring: https://canonical.com/careers.

Lots of additional technical details, benchmarks on a huge range of diverse hardware and architectures, and bikeshedding of all the things can be found below:

For questions and comments, please post to the Kernel section on Ubuntu Discourse.



12 November 2023

Petter Reinholdtsen: New and improved sqlcipher in Debian for accessing Signal database

For a while now I have wanted direct access to the Signal database of messages and channels of my Desktop edition of Signal. I prefer the enforced end-to-end encryption of Signal these days for my communication with friends and family, to increase the level of safety and privacy as well as to raise the cost of the mass surveillance that government and non-government entities practice these days. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately this did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week and set out to update the package. In the process I orphaned it to make it more obvious for the next person looking at it that the package needs proper maintenance. The version in Debian was around five years old, and quite a lot of changes had taken place upstream that needed importing into the Debian maintenance git repository. After spending a few days importing the new upstream versions, and realising that upstream did not care much for SONAME versioning (I saw library symbols being both added and removed with minor version number changes to the project), I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended. Having dug deep into the hole of learning shared library maintenance, I set out a few days ago to upload the new version to Debian experimental to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition. Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration using this simple JSON extraction command:
/usr/bin/jq -r '."key"' ~/.config/Signal/config.json
Assuming the result from that command is 'secretkey', a hexadecimal number representing the key used to encrypt the database, one can now connect to the database and inject the encryption key for access via SQL to fetch information from the database. Here is an example dumping the database structure:
% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> .schema
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE conversations(
      id STRING PRIMARY KEY ASC,
      json TEXT,
      active_at INTEGER,
      type STRING,
      members TEXT,
      name TEXT,
      profileName TEXT
    , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER);
CREATE TABLE identityKeys(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE items(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE sessions(
      id TEXT PRIMARY KEY,
      conversationId TEXT,
      json TEXT
    , ourServiceId STRING, serviceId STRING);
CREATE TABLE attachment_downloads(
    id STRING primary key,
    timestamp INTEGER,
    pending INTEGER,
    json TEXT
  );
CREATE TABLE sticker_packs(
    id TEXT PRIMARY KEY,
    key TEXT NOT NULL,
    author STRING,
    coverStickerId INTEGER,
    createdAt INTEGER,
    downloadAttempts INTEGER,
    installedAt INTEGER,
    lastUsed INTEGER,
    status STRING,
    stickerCount INTEGER,
    title STRING
  , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync
      INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE stickers(
    id INTEGER NOT NULL,
    packId TEXT NOT NULL,
    emoji STRING,
    height INTEGER,
    isCoverOnly INTEGER,
    lastUsed INTEGER,
    path STRING,
    width INTEGER,
    PRIMARY KEY (id, packId),
    CONSTRAINT stickers_fk
      FOREIGN KEY (packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE sticker_references(
    messageId STRING,
    packId TEXT,
    CONSTRAINT sticker_references_fk
      FOREIGN KEY(packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE emojis(
    shortName TEXT PRIMARY KEY,
    lastUsage INTEGER
  );
CREATE TABLE messages(
        rowid INTEGER PRIMARY KEY ASC,
        id STRING UNIQUE,
        json TEXT,
        readStatus INTEGER,
        expires_at INTEGER,
        sent_at INTEGER,
        schemaVersion INTEGER,
        conversationId STRING,
        received_at INTEGER,
        source STRING,
        hasAttachments INTEGER,
        hasFileAttachments INTEGER,
        hasVisualMediaAttachments INTEGER,
        expireTimer INTEGER,
        expirationStartTimestamp INTEGER,
        type STRING,
        body TEXT,
        messageTimer INTEGER,
        messageTimerStart INTEGER,
        messageTimerExpiresAt INTEGER,
        isErased INTEGER,
        isViewOnce INTEGER,
        sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER
        GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER
        GENERATED ALWAYS AS (
          json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1
        ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT
        GENERATED ALWAYS
        AS (ifnull(
          expirationStartTimestamp + (expireTimer * 1000),
          9007199254740991
        )), shouldAffectActivity INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), shouldAffectPreview INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), isUserInitiatedMessage INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'group-v2-change',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER
        GENERATED ALWAYS AS (
          type IS 'group-v2-change' AND
          json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND
          json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND
          json_extract(json, '$.groupV2Change.from') IS NOT NULL AND
          json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci')
        ), isGroupLeaveEventFromOther INTEGER
        GENERATED ALWAYS AS (
          isGroupLeaveEvent IS 1
          AND
          isChangeCreatedByUs IS 0
        ), callId TEXT
        GENERATED ALWAYS AS (
          json_extract(json, '$.callId')
        ));
CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample);
CREATE TABLE jobs(
        id TEXT PRIMARY KEY,
        queueType TEXT STRING NOT NULL,
        timestamp INTEGER NOT NULL,
        data STRING TEXT
      );
CREATE TABLE reactions(
        conversationId STRING,
        emoji STRING,
        fromId STRING,
        messageReceivedAt INTEGER,
        targetAuthorAci STRING,
        targetTimestamp INTEGER,
        unread INTEGER
      , messageId STRING);
CREATE TABLE senderKeys(
        id TEXT PRIMARY KEY NOT NULL,
        senderId TEXT NOT NULL,
        distributionId TEXT NOT NULL,
        data BLOB NOT NULL,
        lastUpdatedDate NUMBER NOT NULL
      );
CREATE TABLE unprocessed(
        id STRING PRIMARY KEY ASC,
        timestamp INTEGER,
        version INTEGER,
        attempts INTEGER,
        envelope TEXT,
        decrypted TEXT,
        source TEXT,
        serverTimestamp INTEGER,
        sourceServiceId STRING
      , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER);
CREATE TABLE sendLogPayloads(
        id INTEGER PRIMARY KEY ASC,
        timestamp INTEGER NOT NULL,
        contentHint INTEGER NOT NULL,
        proto BLOB NOT NULL
      , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sendLogRecipients(
        payloadId INTEGER NOT NULL,
        recipientServiceId STRING NOT NULL,
        deviceId INTEGER NOT NULL,
        PRIMARY KEY (payloadId, recipientServiceId, deviceId),
        CONSTRAINT sendLogRecipientsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE sendLogMessageIds(
        payloadId INTEGER NOT NULL,
        messageId STRING NOT NULL,
        PRIMARY KEY (payloadId, messageId),
        CONSTRAINT sendLogMessageIdsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE preKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE signedPreKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE badges(
        id TEXT PRIMARY KEY,
        category TEXT NOT NULL,
        name TEXT NOT NULL,
        descriptionTemplate TEXT NOT NULL
      );
CREATE TABLE badgeImageFiles(
        badgeId TEXT REFERENCES badges(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        'order' INTEGER NOT NULL,
        url TEXT NOT NULL,
        localPath TEXT,
        theme TEXT NOT NULL
      );
CREATE TABLE storyReads (
        authorId STRING NOT NULL,
        conversationId STRING NOT NULL,
        storyId STRING NOT NULL,
        storyReadDate NUMBER NOT NULL,
        PRIMARY KEY (authorId, storyId)
      );
CREATE TABLE storyDistributions(
        id STRING PRIMARY KEY NOT NULL,
        name TEXT,
        senderKeyInfoJson STRING
      , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER);
CREATE TABLE storyDistributionMembers(
        listId STRING NOT NULL REFERENCES storyDistributions(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        serviceId STRING NOT NULL,
        PRIMARY KEY (listId, serviceId)
      );
CREATE TABLE uninstalled_sticker_packs (
        id STRING NOT NULL PRIMARY KEY,
        uninstalledAt NUMBER NOT NULL,
        storageID STRING,
        storageVersion NUMBER,
        storageUnknownFields BLOB,
        storageNeedsSync INTEGER NOT NULL
      );
CREATE TABLE groupCallRingCancellations(
        ringId INTEGER PRIMARY KEY,
        createdAt INTEGER NOT NULL
      );
CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID;
CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0);
CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID;
CREATE TABLE edited_messages(
        messageId STRING REFERENCES messages(id)
          ON DELETE CASCADE,
        sentAt INTEGER,
        readStatus INTEGER
      , conversationId STRING);
CREATE TABLE mentions (
        messageId REFERENCES messages(id) ON DELETE CASCADE,
        mentionAci STRING,
        start INTEGER,
        length INTEGER
      );
CREATE TABLE kyberPreKeys(
        id STRING PRIMARY KEY NOT NULL,
        json TEXT NOT NULL, ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE callsHistory (
        callId TEXT PRIMARY KEY,
        peerId TEXT NOT NULL, -- conversation id (legacy)   uuid   groupId   roomId
        ringerId TEXT DEFAULT NULL, -- ringer uuid
        mode TEXT NOT NULL, -- enum "Direct"   "Group"
        type TEXT NOT NULL, -- enum "Audio"   "Video"   "Group"
        direction TEXT NOT NULL, -- enum "Incoming"   "Outgoing
        -- Direct: enum "Pending"   "Missed"   "Accepted"   "Deleted"
        -- Group: enum "GenericGroupCall"   "OutgoingRing"   "Ringing"   "Joined"   "Missed"   "Declined"   "Accepted"   "Deleted"
        status TEXT NOT NULL,
        timestamp INTEGER NOT NULL,
        UNIQUE (callId, peerId) ON CONFLICT FAIL
      );
[ dropped all indexes to save space in this blog post ]
CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages
      WHEN
        new.body IS NOT NULL AND new.isViewOnce = 1
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
      END;
CREATE TRIGGER messages_on_insert AFTER INSERT ON messages
      WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        DELETE FROM sendLogPayloads WHERE id IN (
          SELECT payloadId FROM sendLogMessageIds
          WHERE messageId = old.id
        );
        DELETE FROM reactions WHERE rowid IN (
          SELECT rowid FROM reactions
          WHERE messageId = old.id
        );
        DELETE FROM storyReads WHERE storyId = old.storyId;
      END;
CREATE VIRTUAL TABLE messages_fts USING fts5(
        body,
        tokenize = 'signal_tokenizer'
      );
CREATE TRIGGER messages_on_update AFTER UPDATE ON messages
      WHEN
        (new.body IS NULL OR old.body IS NOT new.body) AND
         new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages
      BEGIN
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages
      BEGIN
        DELETE FROM mentions WHERE messageId = new.id;
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
sqlite>
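With the schema in hand, ad-hoc queries are straightforward. As a hedged illustration (the column names come from the schema above, and 'secretkey' is again the key extracted with jq), one could count messages per conversation like this:
% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> SELECT conversationId, count(*) AS messages
   ...>   FROM messages GROUP BY conversationId ORDER BY messages DESC LIMIT 10;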
Finally I have the tool I need to inspect and process Signal messages, without using the vendor-provided client. Now on to transforming it to a more useful format. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11 November 2023

Gunnar Wolf: There once was a miniDebConf in Uruguay...

Meeting Debian people for having a good time together, for some good hacking, for learning, for teaching... is always fun and welcome. It brings energy, life and joy. And this year, due to the six-month-long relocation my family and I decided to have in Argentina, I was unable to attend the real deal, DebConf23 in India. And while I know DebConf is an experience like no other, this year I took part in two miniDebConfs. One I have already shared in this same blog: I was in MiniDebConf Tamil Nadu in India, followed by some days of pre-DebConf preparation and scouting in Kochi proper, where I got to interact with the absolutely great and loving team that prepared DebConf. The other one is still ongoing (but close to finishing). Some months ago, I talked with Santiago Ruano, joking as we were Spanish-speaking DDs announcing to the debian-private mailing list we'd be relocating to around Río de la Plata. And things worked out normally: he has already been in Uruguay for several months, so he decided to rent a house for some days and invite Debian people to do what we do best. I left Paraná Tuesday night (and missed my online class at UNAM! Well, you cannot have everything, right?). I arrived early on Wednesday, and around noon came to the house of the keysigning (well, the place is properly called "Casa Key"; it's a publicity agency that is also rented as a guesthouse in a very nice area of Montevideo, close to Nuevo Pocitos beach). In case you don't know it, Montevideo is on the Northern (or Eastern) shore of Río de la Plata, the widest river in the world (up to 300 km wide, with current and non-salty water). But most important for some Debian contributors: you can even come here by boat! That first evening, we received Ilu, who was in Uruguay by chance for other issues (and we were very happy about it!) and a young and enthusiastic Uruguayan, Felipe, interested in getting involved in Debian. We spent the evening talking about life, the universe and everything... which was a bit tiring, as I had to interface between Spanish and English, talking with two friends that didn't share a common language. On Thursday morning, I went out for an early walk at the beach. And let's say, if only just for the narrative, that I found a lost penguin emerging from Río de la Plata! For those that don't know (who'd be most of you, as he has not been seen at Debian events for 15 years), that's Lisandro Damián Nicanor Pérez Meyer (or just lisandro), long-time maintainer of the Qt ecosystem, and one of our embedded world extraordinaires. So, after we got him dry and fed him fresh river fishes, he gave us a great impromptu talk about understanding and finding our way around the Device Tree Source files for development boards and similar machines, mostly in the ARM world. From Argentina, we also had Emanuel (eamanu) crossing all the way from La Rioja. I spent most of our first workday getting my laptop in shape to be useful as the driver for my online class on Thursday (which is no small feat - people that know the particularities of my much-loved ARM-based laptop will understand), and running a set of tests again on my Raspberry Pi laboratory, which I had not updated in several months. I am happy to say we are finally also building Raspberry Pi images for Trixie (Debian 13, Testing)! Sadly, I managed to burn my USB-to-serial-console (UART) adaptor, and could neither test those, nor the oldstable ones we are still building (and which will probably soon be dropped, if for nothing else, to save disk space).
We enjoyed a lot of socialization time. An important highlight of the conference for me was that we reconnected with a long-lost DD, Eduardo Trápani, and got him interested in getting involved in the project again! This second day, another local Uruguayan, Mauricio, joined us together with his girlfriend, Alicia, and Felipe came again to hang out with us. Sadly, we didn't get photographic evidence of them (nor the permission to post it). The nice house Santiago got for us was very well equipped for a miniDebConf. There were a couple of rounds of pool played by those that enjoyed it (I was very happy just to stand around, take some photos and enjoy the atmosphere and the conversation). Today (Saturday) is the last full-house day of the miniDebConf; tomorrow we will be leaving the house by noon. It was also a very productive day! We had a long conversation about an important discussion that we are about to present on debian-vote@lists.debian.org. It has been a great couple of days! Sadly, it's coming to an end... But this at least gives me the opportunity (and moral obligation!) to write a long blog post. And to thank Santiago for organizing this, and Debian, for sponsoring our trip, stay, food and healthy enjoyment!

7 November 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 2)

TL;DR: This blog post explores the color capabilities of AMD hardware and how they are exposed to userspace through driver-specific properties. It discusses the different color blocks in the AMD Display Core Next (DCN) pipeline and their capabilities, such as predefined transfer functions, 1D and 3D lookup tables (LUTs), and color transformation matrices (CTMs). It also highlights the differences in AMD HW blocks for pre- and post-blending adjustments, and how these differences are reflected in the available driver-specific properties. Overall, this blog post provides a comprehensive overview of the color capabilities of AMD hardware and how they can be controlled by userspace applications through driver-specific properties. This information is valuable for anyone who wants to develop applications that can take advantage of the AMD color management pipeline. Get a closer look at each hardware block's capabilities, unlock a wealth of knowledge about AMD display hardware, and enhance your understanding of graphics and visual computing. Stay tuned for future developments as we embark on a quest for GPU color capabilities in the ever-evolving realm of rainbow treasures.
Operating Systems can use the power of GPUs to ensure consistent color reproduction across graphics devices. We can use GPU-accelerated color management to manage the diversity of color profiles, do color transformations to convert between High-Dynamic-Range (HDR) and Standard-Dynamic-Range (SDR) content and color enhancements for wide color gamut (WCG). However, to make use of GPU display capabilities, we need an interface between userspace and the kernel display drivers that is currently absent in the Linux/DRM KMS API. In the previous blog post I presented how we are expanding the Linux/DRM color management API to expose specific properties of AMD hardware. Now, I'll guide you to the color features for the Linux/AMD display driver. We embark on a journey through DRM/KMS, AMD Display Manager, and AMD Display Core and delve into the color blocks to uncover the secrets of color manipulation within AMD hardware. Here we'll talk less about the color tools and more about where to find them in the hardware. We resort to driver-specific properties to reach AMD hardware blocks with color capabilities. These blocks provide features like predefined transfer functions, color transformation matrices, and 1-dimensional (1D LUT) and 3-dimensional lookup tables (3D LUT). Here, we will understand how these color features are strategically placed into color blocks both before and after blending in Display Pipe and Plane (DPP) and Multiple Pipe/Plane Combined (MPC) blocks. That said, welcome back to the second part of our thrilling journey through AMD's color management realm!

AMD Display Driver in the Linux/DRM Subsystem: The Journey In my 2022 XDC talk "I'm not an AMD expert, but…", I briefly explained the organizational structure of the Linux/AMD display driver where the driver code is bifurcated into a Linux-specific section and a shared-code portion. To reveal AMD's color secrets through the Linux kernel DRM API, our journey led us through these layers of the Linux/AMD display driver's software stack. It includes traversing the DRM/KMS framework, the AMD Display Manager (DM), and the AMD Display Core (DC) [1]. The DRM/KMS framework provides the atomic API for color management through KMS properties represented by struct drm_property. We extended the color management interface exposed to userspace by leveraging existing resources and connecting them with driver-specific functions for managing modeset properties. On the AMD DC layer, the interface with hardware color blocks is established. The AMD DC layer contains OS-agnostic components that are shared across different platforms, making it an invaluable resource. This layer already implements hardware programming and resource management, simplifying the external developer's task. While examining the DC code, we gain insights into the color pipeline and capabilities, even without direct access to specifications. Additionally, AMD developers provide essential support by answering queries and reviewing our work upstream. The primary challenge involved identifying and understanding relevant AMD DC code to configure each color block in the color pipeline. However, the ultimate goal was to bridge the DC color capabilities with the DRM API. For this, we changed the AMD DM, the OS-dependent layer connecting the DC interface to the DRM/KMS framework. We defined and managed driver-specific color properties, facilitated the transport of user space data to the DC, and translated DRM features and settings to the DC interface. Considerations were also made for differences in the color pipeline based on hardware capabilities.

Exploring Color Capabilities of the AMD display hardware Now, let's dive into the exciting realm of AMD color capabilities, where an abundance of techniques and tools await to make your colors look extraordinary across diverse devices. First, we need to know a little about the color transformation and calibration tools and techniques that you can find in different blocks of the AMD hardware. I borrowed some images from [2] [3] [4] to help you understand the information.

Predefined Transfer Functions (Named Fixed Curves): Transfer functions serve as the bridge between the digital and visual worlds, defining the mathematical relationship between digital color values and linear scene/display values and ensuring consistent color reproduction across different devices and media. You can learn more about curves in the chapter GPU Gems 3 - The Importance of Being Linear by Larry Gritz and Eugene d'Eon. ITU-R BT.2100 introduces three main types of transfer functions:
  • OETF: the opto-electronic transfer function, which converts linear scene light into the video signal, typically within a camera.
  • EOTF: electro-optical transfer function, which converts the video signal into the linear light output of the display.
  • OOTF: opto-optical transfer function, which has the role of applying the rendering intent.
AMD's display driver supports the following pre-defined transfer functions (aka named fixed curves):
  • Linear/Unity: linear/identity relationship between pixel value and luminance value;
  • Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
  • sRGB: the piece-wise transfer function from IEC 61966-2-1:1999;
  • BT.709: has a linear segment in the bottom part and then a power function with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by ITU-R BT.709-6;
  • PQ (Perceptual Quantizer): used for HDR display, allows luminance range capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
These capabilities vary depending on the hardware block, with some utilizing hardcoded curves and others relying on AMD s color module to construct curves from standardized coefficients. It also supports user/custom curves built from a lookup table.

1D LUTs (1-dimensional Lookup Table): A 1D LUT is a versatile tool, defining a one-dimensional color transformation based on a single parameter. It's very well explained by Jeremy Selan in GPU Gems 2 - Chapter 24: Using Lookup Tables to Accelerate Color Transformations. It enables adjustments to color, brightness, and contrast, making it ideal for fine-tuning. In the Linux AMD display driver, the atomic API offers a 1D LUT with 4096 entries and 8-bit depth, while legacy gamma uses a size of 256.

3D LUTs (3-dimensional Lookup Table): These tables work in three dimensions: red, green, and blue. They're perfect for complex color transformations and adjustments between color channels. They're also more complex to manage and require more computational resources. Jeremy also explains 3D LUTs in GPU Gems 2 - Chapter 24: Using Lookup Tables to Accelerate Color Transformations.

CTM (Color Transformation Matrices): Color transformation matrices facilitate the transition between different color spaces, playing a crucial role in color space conversion.

HDR Multiplier: HDR multiplier is a factor applied to the color values of an image to increase their overall brightness.

AMD Color Capabilities in the Hardware Pipeline First, let's take a closer look at the AMD Display Core Next hardware pipeline in the Linux kernel documentation for the AMDGPU driver - Display Core Next. In the AMD Display Core Next hardware pipeline, we encounter two hardware blocks with color capabilities: the Display Pipe and Plane (DPP) and the Multiple Pipe/Plane Combined (MPC). The DPP handles color adjustments per plane before blending, while the MPC engages in post-blending color adjustments. In short, we expect DPP color capabilities to match up with DRM plane properties, and MPC color capabilities to play nice with DRM CRTC properties. Note: here's the catch - there are some DRM CRTC color transformations that don't have a corresponding AMD MPC color block, and vice versa. It's like a puzzle, and we're here to solve it!

AMD Color Blocks and Capabilities We can finally talk about the color capabilities of each AMD color block. As it varies based on the generation of hardware, let's take the DCN3+ family as reference. What's possible to do before and after blending depends on hardware capabilities described in the kernel driver by struct dpp_color_caps and struct mpc_color_caps. The AMD Steam Deck hardware provides a tangible example of these capabilities. Therefore, we take the SteamDeck/DCN301 driver as an example and look at the color pipeline capabilities described in the file: driver/gpu/drm/amd/display/dcn301/dcn301_resources.c
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1; // If it is a Display Core Next (DCN): yes. Zero means DCE.
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1; // Input Color Space Conversion (CSC) matrix.
dc->caps.color.dpp.dgam_ram = 0; // The old degamma block for degamma curve (hardcoded and LUT). "Gamma correction" is the new one.
dc->caps.color.dpp.dgam_rom_caps.srgb = 1; // sRGB hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1; // BT2020 hardcoded curve support (seems not actually in use)
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1; // Gamma 2.2 hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.pq = 1; // PQ hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.hlg = 1; // HLG hardcoded curve support
dc->caps.color.dpp.post_csc = 1; // CSC matrix
dc->caps.color.dpp.gamma_corr = 1; // New "Gamma Correction" block for degamma user LUT;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1; // 3D LUT support. If so, it's always preceded by a shaper curve. 
dc->caps.color.dpp.ogam_ram = 1; // "Blend Gamma" block for custom curve just after blending
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1; // Post-blending CTM (pre-blending CTM is always supported)
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; // Post-blending 3D LUT (preceded by shaper curve)
dc->caps.color.mpc.ogam_ram = 1; // Post-blending regamma.
// No pre-defined TF supported for regamma.
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1; // Output CSC matrix.
I included some inline comments in each element of the color caps to quickly describe them, but you can find the same information in the Linux kernel documentation. See more in struct dpp_color_caps, struct mpc_color_caps and struct rom_curve_caps. Now, using this guideline, we go through the color capabilities of the DPP and MPC blocks and talk more about mapping driver-specific properties to corresponding color blocks.
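As a quick, hedged way to see which driver-specific color properties a given kernel actually exposes on KMS objects, the modetest tool from libdrm can list planes and CRTCs together with their properties; the grep pattern below is only illustrative, since the exact property names depend on the driver version in use:
% modetest -M amdgpu -p | grep -iE 'DEGAMMA|SHAPER|LUT3D|3DLUT|CTM|GAMMA|BLEND'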

DPP Color Pipeline: Before Blending (Per Plane) Let's explore the capabilities of DPP blocks and what you can achieve with a color block. The very first thing to pay attention to is the display architecture of the display hardware: older AMD hardware uses a display architecture called DCE - Display and Compositing Engine - while newer hardware follows DCN - Display Core Next.
The architecture is described by: dc->caps.color.dpp.dcn_arch

AMD Plane Degamma: TF and 1D LUT Described by: dc->caps.color.dpp.dgam_ram, dc->caps.color.dpp.dgam_rom_caps, dc->caps.color.dpp.gamma_corr AMD Plane Degamma data is mapped to the initial stage of the DPP pipeline. It is utilized to transition from scanout/encoded values to linear values for arithmetic operations. Plane Degamma supports both pre-defined transfer functions and 1D LUTs, depending on the hardware generation. DCN2 and older families handle both types of curve in the Degamma RAM block (dc->caps.color.dpp.dgam_ram); DCN3+ separates hardcoded curves and the 1D LUT into two blocks: the Degamma ROM (dc->caps.color.dpp.dgam_rom_caps) and the Gamma Correction block (dc->caps.color.dpp.gamma_corr), respectively. Pre-defined transfer functions:
  • they are hardcoded curves (read-only memory - ROM);
  • supported curves: sRGB EOTF, BT.709 inverse OETF, PQ EOTF and HLG OETF, Gamma 2.2, Gamma 2.4 and Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. Setting TF = Identity/Default and LUT as NULL means bypass. References:

AMD Plane 3x4 CTM (Color Transformation Matrix) AMD Plane CTM data goes to the DPP Gamut Remap block, supporting a 3x4 fixed point (s31.32) matrix for color space conversions. The data is interpreted as a struct drm_color_ctm_3x4. Setting NULL means bypass. References:

AMD Plane Shaper: TF + 1D LUT Described by: dc->caps.color.dpp.hw_3d_lut The Shaper block fine-tunes color adjustments before applying the 3D LUT, optimizing the use of the limited entries in each dimension of the 3D LUT. On AMD hardware, a 3D LUT always means a preceding shaper 1D LUT used for delinearizing and/or normalizing the color space before applying a 3D LUT, so this entry on the DPP color caps (dc->caps.color.dpp.hw_3d_lut) means support for both the shaper 1D LUT and the 3D LUT. A pre-defined transfer function enables delinearizing content with or without a shaper LUT, where the AMD color module calculates the resulting shaper curve. Shaper curves go from linear values to encoded values. If we are already in a non-linear space and/or don't need to normalize values, we can set an Identity TF for the shaper, which works similarly to bypass and is also the default TF value. Pre-defined transfer functions:
  • there is no DPP Shaper ROM. Curves are calculated by AMD color modules. Check calculate_curve() function in the file amd/display/modules/color/color_gamma.c.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting the Plane Shaper TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT as NULL works as bypass. References:

AMD Plane 3D LUT Described by: dc->caps.color.dpp.hw_3d_lut The 3D LUT in the DPP block facilitates complex color transformations and adjustments. A 3D LUT is a three-dimensional array where each element is an RGB triplet. As mentioned before, dc->caps.color.dpp.hw_3d_lut describes whether the DPP 3D LUT is supported. The AMD driver-specific property advertises the size of a single dimension via the LUT3D_SIZE property. Plane 3D LUT is a blob property where the data is interpreted as an array of struct drm_color_lut elements and the number of entries is LUT3D_SIZE cubed. The array contains samples from the approximated function; values between samples are estimated by tetrahedral interpolation. The array is accessed with three indices, one for each input dimension (color channel), blue being the outermost dimension, red the innermost. This distribution is better visualized when examining the code in [RFC PATCH 5/5] drm/amd/display: Fill 3D LUT from userspace by Alex Hung:
+	for (nib = 0; nib < 17; nib++) {
+		for (nig = 0; nig < 17; nig++) {
+			for (nir = 0; nir < 17; nir++) {
+				ind_lut = 3 * (nib + 17*nig + 289*nir);
+
+				rgb_area[ind].red = rgb_lib[ind_lut + 0];
+				rgb_area[ind].green = rgb_lib[ind_lut + 1];
+				rgb_area[ind].blue = rgb_lib[ind_lut + 2];
+				ind++;
+			}
+		}
+	}
In our driver-specific approach we opted to advertise its behavior to userspace instead of implicitly dealing with it in the kernel driver. AMD's hardware supports 3D LUTs with 17-size or 9-size (4913 and 729 entries respectively), and you can choose between 10-bit or 12-bit. In the current driver-specific work we focus on enabling only the 17-size 12-bit 3D LUT, as in [PATCH v3 25/32] drm/amd/display: add plane 3D LUT support:
+		/* Stride and bit depth are not programmable by API yet.
+		 * Therefore, only supports 17x17x17 3D LUT (12-bit).
+		 */
+		lut->lut_3d.use_tetrahedral_9 = false;
+		lut->lut_3d.use_12bits = true;
+		lut->state.bits.initialized = 1;
+		__drm_3dlut_to_dc_3dlut(drm_lut, drm_lut3d_size, &lut->lut_3d,
+					lut->lut_3d.use_tetrahedral_9,
+					MAX_COLOR_3DLUT_BITDEPTH);
A refined control of 3D LUT parameters should go through a follow-up version or generic API. Setting 3D LUT to NULL means bypass. References:

AMD Plane Blend/Out Gamma: TF + 1D LUT Described by: dc->caps.color.dpp.ogam_ram The Blend/Out Gamma block applies the final touch-up before blending, allowing users to linearize content after the 3D LUT and just before the blending. It supports both a 1D LUT and pre-defined TFs. We can see the Shaper and Blend LUTs as 1D LUTs that sandwich the 3D LUT. So, if we don't need 3D LUT transformations, we may want to only use the Degamma block to linearize and skip Shaper, 3D LUT and Blend. Pre-defined transfer function:
  • there is no DPP Blend ROM. Curves are calculated by AMD color modules;
  • supported curves: Identity, sRGB EOTF, BT.709 inverse OETF, PQ EOTF, HLG inverse OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. If plane_blend_tf_property != Identity TF, the AMD color module will combine the user LUT values with the pre-defined TF into the LUT parameters to be programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

MPC Color Pipeline: After Blending (Per CRTC)

DRM CRTC Degamma 1D LUT The degamma lookup table (LUT) for converting framebuffer pixel data before applying the color conversion matrix. The data is interpreted as an array of struct drm_color_lut elements. Setting NULL means bypass. Not really supported: the driver is currently reusing the DPP degamma LUT block (dc->caps.color.dpp.dgam_ram and dc->caps.color.dpp.gamma_corr) for supporting the DRM CRTC Degamma LUT, as explained in [PATCH v3 20/32] drm/amd/display: reject atomic commit if setting both plane and CRTC degamma.

DRM CRTC 3x3 CTM Described by: dc->caps.color.mpc.gamut_remap It sets the current transformation matrix (CTM) applied to pixel data after the lookup through the degamma LUT and before the lookup through the gamma LUT. The data is interpreted as a struct drm_color_ctm. Setting NULL means bypass.

DRM CRTC Gamma 1D LUT + AMD CRTC Gamma TF Described by: dc->caps.color.mpc.ogam_ram After all that, you might still want to convert the content to wire encoding. No worries; in addition to the DRM CRTC 1D LUT, we've got an AMD CRTC gamma transfer function (TF) to make it happen. Possible TF values are defined by enum amdgpu_transfer_function. Pre-defined transfer functions:
  • there is no MPC Gamma ROM. Curves are calculated by AMD color modules.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting the CRTC Gamma TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

Others

AMD CRTC Shaper and 3D LUT We have previously worked on exposing the CRTC shaper and CRTC 3D LUT, but they were removed from the AMD driver-specific color series because they lack a userspace use case. The CRTC shaper and 3D LUT work similarly to the plane shaper and 3D LUT, but after blending (MPC block). The difference here is that setting (not bypassing) the Shaper and Gamma blocks together is not expected, since both blocks are used to delinearize the input space. In summary, we either set Shaper + 3D LUT or Gamma.

Input and Output Color Space Conversion There are two other color capabilities of AMD display hardware that were integrated into DRM by previous work and are worth a brief explanation here. The DC Input CSC sets pre-defined coefficients from the values of the DRM plane color_range and color_encoding properties. It is used for color space conversion of the input content. On the other hand, we have the DC Output CSC (OCSC), which sets pre-defined coefficients from DRM connector colorspace properties. It is used for color space conversion of the composed image to the one supported by the sink. References:

The search for rainbow treasures is not over yet If you want to understand a little more about this work, be sure to watch the two talks Joshua and I presented at XDC 2023 about AMD/Steam Deck colors on Gamescope. In the time between the first and second part of this blog post, Uma Shashank and Chaitanya Kumar Borah published the plane color pipeline for Intel, and Harry Wentland implemented a generic API for DRM based on VKMS support. We discussed these two proposals and the next steps for Color on Linux during the Color Management workshop at XDC 2023, and I briefly shared the workshop results in the 2023 XDC lightning talk session. The search for rainbow treasures is not over yet! We plan to meet again next year at the 2024 Display Hackfest in Coruña, Spain (Igalia's HQ) to keep up the pace and continue advancing today's display needs on Linux. Finally, a HUGE thank you to everyone who worked with me on exploring AMD's color capabilities and making them available in userspace.

5 November 2023

Vincent Bernat: Non-interactive SSH password authentication

SSH offers several forms of authentication, such as passwords and public keys. The latter are considered more secure. However, password authentication remains prevalent, particularly with network equipment.1 A classic solution to avoid typing a password for each connection is sshpass, or its more correct variant passh. Here is a wrapper for Zsh, getting the password from pass, a simple password manager:2
pssh() {
  passh -p <(pass show network/ssh/password | head -1) ssh "$@"
}
compdef pssh=ssh
This approach is a bit brittle as it requires parsing the output of the ssh command to look for a password prompt. Moreover, if no password is required, the password manager is still invoked. Since OpenSSH 8.4, we can use SSH_ASKPASS and SSH_ASKPASS_REQUIRE instead:
ssh() {
  set -o localoptions -o localtraps
  local passname=network/ssh/password
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
pass show $passname | head -1
EOF
  chmod u+x $helper
  SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force command ssh "$@"
}
If the password is incorrect, we can display a prompt on the second attempt:
ssh() {
  set -o localoptions -o localtraps
  local passname=network/ssh/password
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
if [ -k $helper ]; then
  {
    oldtty=\$(stty -g)
    trap 'stty \$oldtty < /dev/tty 2> /dev/null' EXIT INT TERM HUP
    stty -echo
    print "\rpassword: "
    read password
    printf "\n"
  } > /dev/tty < /dev/tty
  printf "%s" "\$password"
else
  pass show $passname | head -1
  chmod +t $helper
fi
EOF
  chmod u+x $helper
  SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force command ssh "$@"
}
A possible improvement is to use a different password entry depending on the remote host:3
ssh() {
  # Grab login information
  local -A details
  details=(${=${(M)${:-"${(@f)$(command ssh -G "$@" 2>/dev/null)}"}:#(host|hostname|user) *}})
  local remote=${details[host]:-details[hostname]}
  local login=${details[user]}@${remote}
  # Get password name
  local passname
  case "$login" in
    admin@*.example.net)  passname=company1/ssh/admin ;;
    bernat@*.example.net) passname=company1/ssh/bernat ;;
    backup@*.example.net) passname=company1/ssh/backup ;;
  esac
  # No password name? Just use regular SSH
  [[ -z $passname ]] && {
    command ssh "$@"
    return $?
  }
  # Invoke SSH with the helper for SSH_ASKPASS
  # […]
}
It is also possible to make scp invoke our custom ssh function:
scp() {
  set -o localoptions -o localtraps
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
source ${(%):-%x}
ssh "\$@"
EOF
  command scp -S $helper "$@"
}
For the complete code, have a look at my zshrc. As an alternative, you can put the ssh() function body into its own script file and replace command ssh with /usr/bin/ssh to avoid an unwanted recursive call. In this case, the scp() function is not needed anymore.
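Here is a minimal sketch of that standalone-script variant, assuming the same pass entry as above; the file name and location are placeholders:
#!/bin/zsh
# ~/bin/pssh - standalone wrapper; calling /usr/bin/ssh directly avoids
# recursing into a shell function named ssh.
passname=network/ssh/password
helper=$(mktemp)
trap "rm -f $helper" EXIT INT
> $helper <<EOF
#!/bin/sh
pass show $passname | head -1
EOF
chmod u+x $helper
SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force /usr/bin/ssh "$@"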

  1. First, some vendors make it difficult to associate an SSH key with a user. Then, many vendors do not support certificate-based authentication, making it difficult to scale. Finally, interactions between public-key authentication and finer-grained authorization methods like TACACS+ and Radius are still uncharted territory.
  2. The clear-text password never appears on the command line, in the environment, or on the disk, making it difficult for a third party without elevated privileges to capture it. On Linux, Zsh provides the password through a file descriptor.
  3. To decipher the fourth line, you may get help from print -l and the zshexpn(1) manual page. details is an associative array defined from an array alternating keys and values.

Thorsten Alteholz: My Debian Activities in October 2023

FTP master This month I accepted 361 and rejected 34 packages. The overall number of packages that got accepted was 362. Debian LTS This was my hundred-and-twelfth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded: Unfortunately upstream still could not resolve whether the patch for CVE-2023-42118 of libspf2 is valid, so no progress happened here.
I also continued to work on bind9, trying to understand why some tests fail. Last but not least I did some days of frontdesk duties and took part in the LTS meeting. Debian ELTS This month was the sixty-third ELTS month. During my allocated time I uploaded: I also continued to work on bind9 and, as with the version in LTS, I am trying to understand why some tests fail. Last but not least I did some days of frontdesk duties. Debian Printing This month I uploaded a new upstream version of: Within the context of preserving old printing packages, I adopted: If you know of any other package that is also needed and still maintained by the QA team, please tell me. I also uploaded new upstream versions of packages or uploaded a package to fix one or the other issue: This work is generously funded by Freexian! Debian Mobcom This month I uploaded a package to fix one or the other issue: Other stuff This month I uploaded new upstream versions of packages, did a source upload for a transition, or uploaded packages to fix one or the other issue:

1 November 2023

Matthew Garrett: Why ACPI?

"Why does ACPI exist" - - the greatest thread in the history of forums, locked by a moderator after 12,239 pages of heated debate, wait no let me start again.

Why does ACPI exist? In the beforetimes power management on x86 was done by jumping to an opaque BIOS entry point and hoping it would do the right thing. It frequently didn't. We called this Advanced Power Management (Advanced because before this, power management involved custom drivers for every machine, and everyone agreed that this was a bad idea), and it involved the firmware having to save and restore the state of every piece of hardware in the system. This meant that assumptions about hardware configuration were baked into the firmware - failed to program your graphics card exactly the way the BIOS expected? Hurrah! It only saved and restored a subset of the state that you configured, and now there's potential data corruption for you. The developers of ACPI made the reasonable decision that, well, maybe since the OS was the one setting state in the first place, the OS should restore it.

So far so good. But some state is fundamentally device specific, at a level that the OS generally ignores. How should this state be managed? One way to do that would be to have the OS know about the device specific details. Unfortunately that means you can't ship the computer without having OS support for it, which means having OS support for every device (exactly what we'd got away from with APM). This, uh, was not an option the PC industry seriously considered. The alternative is that you ship something that abstracts the details of the specific hardware and makes that abstraction available to the OS. This is what ACPI does, and it's also what things like Device Tree do. Both provide static information about how the platform is configured, which can then be consumed by the OS and avoid needing device-specific drivers or configuration to be built-in.

The main distinction between Device Tree and ACPI is that Device Tree is purely a description of the hardware that exists, and so still requires the OS to know what's possible - if you add a new type of power controller, for instance, you need to add a driver for that to the OS before you can express that via Device Tree. ACPI decided to include an interpreted language to allow vendors to expose functionality to the OS without the OS needing to know about the underlying hardware. So, for instance, ACPI allows you to associate a device with a function to power down that device. That function may, when executed, trigger a bunch of register accesses to a piece of hardware otherwise not exposed to the OS, and that hardware may then cut the power rail to the device to power it down entirely. And that can be done without the OS having to know anything about the control hardware.
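To get a feel for what this interpreted language looks like on a real machine, the ACPI tables can be dumped and decompiled from userspace. A minimal sketch, assuming the acpica-tools package (acpidump, acpixtract, iasl) is installed:
sudo acpidump > acpi-tables.dat      # dump all ACPI tables from the firmware
acpixtract -a acpi-tables.dat        # split the dump into per-table files (dsdt.dat, ...)
iasl -d dsdt.dat                     # decompile the DSDT AML into readable ASL (dsdt.dsl)
grep -n -E '_PS[0-3]|Method' dsdt.dsl | head   # look for power-management methods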

How is this better than just calling into the firmware to do it? Because the fact that ACPI declares that it's going to access these registers means the OS can figure out that it shouldn't, because it might otherwise collide with what the firmware is doing. With APM we had no visibility into that - if the OS tried to touch the hardware at the same time APM did, boom, almost impossible to debug failures (This is why various hardware monitoring drivers refuse to load by default on Linux - the firmware declares that it's going to touch those registers itself, so Linux decides not to in order to avoid race conditions and potential hardware damage. In many cases the firmware offers a collaborative interface to obtain the same data, and a driver can be written to get that. this bug comment discusses this for a specific board)

Unfortunately ACPI doesn't entirely remove opaque firmware from the equation - ACPI methods can still trigger System Management Mode, which is basically a fancy way to say "Your computer stops running your OS, does something else for a while, and you have no idea what". This has all the same issues that APM did, in that if the hardware isn't in exactly the state the firmware expects, bad things can happen. While historically there were a bunch of ACPI-related issues because the spec didn't define every single possible scenario and also there was no conformance suite (eg, should the interpreter be multi-threaded? Not defined by spec, but influences whether a specific implementation will work or not!), these days overall compatibility is pretty solid and the vast majority of systems work just fine - but we do still have some issues that are largely associated with System Management Mode.

One example is a recent Lenovo one, where the firmware appears to try to poke the NVME drive on resume. There's some indication that this is intended to deal with transparently unlocking self-encrypting drives on resume, but it seems to do so without taking IOMMU configuration into account and so things explode. It's kind of understandable why a vendor would implement something like this, but it's also kind of understandable that doing so without OS cooperation may end badly.

This isn't something that ACPI enabled - in the absence of ACPI firmware vendors would just be doing this unilaterally with even less OS involvement and we'd probably have even more of these issues. Ideally we'd "simply" have hardware that didn't support transitioning back to opaque code, but we don't (ARM has basically the same issue with TrustZone). In the absence of the ideal world, by and large ACPI has been a net improvement in Linux compatibility on x86 systems. It certainly didn't remove the "Everything is Windows" mentality that many vendors have, but it meant we largely only needed to ensure that Linux behaved the same way as Windows in a finite number of ways (ie, the behaviour of the ACPI interpreter) rather than in every single hardware driver, and so the chances that a new machine will work out of the box are much greater than they were in the pre-ACPI period.

There's an alternative universe where we decided to teach the kernel about every piece of hardware it should run on. Fortunately (or, well, unfortunately) we've seen that in the ARM world. Most device-specific code simply never reaches mainline, and most users are stuck running ancient kernels as a result. Imagine every x86 device vendor shipping their own kernel optimised for their hardware, and now imagine how well that works out given the quality of their firmware. Does that really seem better to you?

It's understandable why ACPI has a poor reputation. But it's also hard to figure out what would work better in the real world. We could have built something similar on top of Open Firmware instead but the distinction wouldn't be terribly meaningful - we'd just have Forth instead of the ACPI bytecode language. Longing for a non-ACPI world without presenting something that's better and actually stands a reasonable chance of adoption doesn't make the world a better place.


26 October 2023

Dima Kogan: Talking to ROS from outside a LAN

The problem
This is about ROS version 1. Version 2 is different, and maybe they fixed stuff. But I kinda doubt it since this thing is heinous in a million ways. Alright, so let's say we have some machines in a LAN doing ROS stuff and we have another machine outside the LAN that wants to listen in (like to get a realtime visualization, say). This is an extremely common scenario, but they created enough hoops to make this not work. Let's say we have 3 computers:
  • router: the bridge between the two networks. This has two NICs. The inner IP is 10.0.1.1 and the outer IP is 12.34.56.78
  • inner: a machine in the LAN that's doing ROS stuff. IP 10.0.1.99
  • outer: a machine outside that LAN that wants to listen in. IP 12.34.56.99
Let's say the router is doing ROS stuff. It's running the ROS master and some nodes like this:
ROS_IP=10.0.1.1 roslaunch whatever
If you omit the ROS_IP it'll pick router, which may or may not work, depending on how the DNS is set up. Here we set it to 10.0.1.1 to make it possible for the inner machine to communicate (we'll see why in a bit). An aside: ROS should use the IP by default instead of the name because the IP will work even if the DNS isn't set up. If there are multiple extant IPs, it should throw an error. But all that would be way too user-friendly. OK. So we have a ROS master on 10.0.1.1 on the default port: 11311. The inner machine can rostopic echo and all that. Great. What if I try to listen in from outer? I say
ROS_MASTER_URI=http://12.34.56.78:11311 rostopic list
This connects to the router on that port, and it works well: I get the list of available topics. Here this works because the router is the router. If inner was running the ROS master then we'd need to do a forward for port 11311. In any case, this works and we understand it. So clearly we can talk to the ROS master. Right? Wrong! Let's actually listen in on a specific topic on outer:
ROS_MASTER_URI=http://12.34.56.78:11311 rostopic echo /some/topic
This does not work. No errors are reported. It just sits there, which looks like no data is coming in on that topic. But this is a lie: it's actually broken.

The diagnosis
So this is our problem. It's a very common use case, and there are plenty of internet people asking about it, with no specific solutions. I debugged it, and the details are here. To figure out what's going on, I made a syscall log on a machine inside the LAN, where a simple rostopic echo does work:
sysdig -A proc.name=rostopic and fd.type contains ipv -s 2000
This shows us all the communication between inner running rostopic and the server. It's really chatty. It's all TCP. There are multiple connections to the router on port 11311. It also starts up multiple TCP servers on the client that listen to connections; these are likely to be broken if we were running the client on outer and a machine inside the LAN tried to talk to them; but thankfully in my limited testing nothing actually tried to talk to them. The conversations on port 11311 are really long, but here's the punchline. inner tells the router:
POST /RPC2 HTTP/1.1                                                                                                                 
Host: 10.0.1.1:11311                                                                                                          
Accept-Encoding: gzip                                                                                                               
Content-Type: text/xml                                                                                                              
User-Agent: Python-xmlrpc/3.11                                                                                                      
Content-Length: 390                                                                                                                 
<?xml version='1.0'?>
<methodCall>
<methodName>registerSubscriber</methodName>
<params>
<param>
<value><string>/rostopic_2447878_1698362157834</string></value>
</param>
<param>
<value><string>/some/topic</string></value>
</param>
<param>
<value><string>*</string></value>
</param>
<param>
<value><string>http://inner:38229/</string></value>
</param>
</params>
</methodCall>
Yes. It's laughably chatty. Then the router replies:
HTTP/1.1 200 OK
Server: BaseHTTP/0.6 Python/3.8.10
Date: Thu, 26 Oct 2023 23:15:28 GMT
Content-type: text/xml
Content-length: 342
<?xml version='1.0'?>
<methodResponse>
<params>
<param>
<value><array><data>
<value><int>1</int></value>
<value><string>Subscribed to [/some/topic]</string></value>
<value><array><data>
<value><string>http://10.0.1.1:45517/</string></value>
</data></array></value>
</data></array></value>
</param>
</params>
</methodResponse>
Then this sequence of system calls happens in the rostopic process (an excerpt from the sysdig log):
> connect fd=10(<4>) addr=10.0.1.1:45517
< connect res=-115(EINPROGRESS) tuple=10.0.1.99:47428->10.0.1.1:45517 fd=10(<4t>10.0.1.99:47428->10.0.1.1:45517)
< getsockopt res=0 fd=10(<4t>10.0.1.99:47428->10.0.1.1:45517) level=1(SOL_SOCKET) optname=4(SO_ERROR) val=0 optlen=4
So the inner client makes an outgoing TCP connection on the address given to it by the ROS master above: 10.0.1.1:45517. This IP is only accessible from within the LAN, which works fine when talking to it from inner, but would be a problem from the outside. Furthermore, some sort of single-port-forwarding scheme wouldn't fix connecting from outer either, since the port number is dynamic. To confirm what we think is happening, the sequence of syscalls when trying to rostopic echo from outer does indeed fail:
connect fd=10(<4>) addr=10.0.1.1:45517 
connect res=-115(EINPROGRESS) tuple=10.0.1.1:46204->10.0.1.1:45517 fd=10(<4t>10.0.1.1:46204->10.0.1.1:45517)
getsockopt res=0 fd=10(<4t>10.0.1.1:46204->10.0.1.1:45517) level=1(SOL_SOCKET) optname=4(SO_ERROR) val=-111(ECONNREFUSED) optlen=4
That's the breakage mechanism: the ROS master asks us to communicate on an address we can't talk to. Debugging this is easy with sysdig:
sudo sysdig -A -s 400 evt.buffer contains '"Subscribed to"' and proc.name=rostopic
This prints out all syscalls seen by the rostopic command that contain the string Subscribed to, so you can see the different addresses the ROS master gives us in response to different commands. OK. So can we get the ROS master to give us an address that we can actually talk to? Sorta. Remember that we invoked the master with
ROS_IP=10.0.1.1 roslaunch whatever
The ROS_IP environment variable is exactly the address that the master gives out. So in this case, we can fix it by doing this instead:
ROS_IP=12.34.56.78 roslaunch whatever
Then the outer machine will be asked to talk to 12.34.56.78:45517, which works. Unfortunately, if we do that, then the inner machine won't be able to communicate. So some sort of ssh port forward cannot fix this: we need a lower-level tunnel, like a VPN or something. And another rant. Here rostopic tried to connect to an unreachable address, which failed. But rostopic knows the connection failed! It should throw an error message to the user. Something like this would be wonderful:
ERROR! Tried to connect to 10.0.1.1:45517 ($ROS_IP:dynamicport), but connect() returned ECONNREFUSED
That would be immensely helpful. It would tell the user that something went wrong (instead of no data being sent), and it would give a strong indication of the problem and how to fix it. But that would be asking too much.

The solution
So we need a VPN-like thing. I just tried sshuttle, and it just works. Start the ROS node in the way that makes connections from within the LAN work:
ROS_IP=10.0.1.1 roslaunch whatever
Then on the outer client:
sshuttle -r router 10.0.1.0/24
This connects to the router over ssh and does some hackery to make all connections from outer to 10.0.1.x transparently route into the LAN. On all ports. rostopic echo then works. I haven't done any thorough testing, but hopefully it's reliable and has low overhead; I don't know. I haven't tried it but almost certainly this would work even with the ROS master running on inner. This would be accomplished like this:
  1. Tell ssh how to connect to inner. Dropping this into ~/.ssh/config should do it:
    Host inner
    HostName 10.0.1.99
    ProxyJump router
    
  2. Do the magic thing:
    sshuttle -r inner 10.0.1.0/24
    
I'm sure any other VPN-like thing would work also.
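For what it's worth, here is a quick way to sanity-check the tunnel from outer, reusing the master and topic from the earlier examples (the addresses are the ones assumed throughout this post, and sshuttle may prompt for a local sudo password):
# in one terminal: route the whole LAN subnet through the router over ssh
sshuttle -r router 10.0.1.0/24
# in another terminal: the dynamic XML-RPC ports are now reachable, so we can
# even point at the master by its LAN address
ROS_MASTER_URI=http://10.0.1.1:11311 rostopic echo /some/topic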

22 October 2023

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to .. delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact, back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested installing and using Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I don't have a separate home server running 24/7 at home and just have everything inside the very compact NAS case. So here is the thing you NEED to know - Home Assistant Container is a DEMO version of Home Assistant! If you want to have the full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use Home Assistant OS. Ideally on dedicated hardware. Ideally on an HA Green box, but any tiny PC would also work great. A Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card used for storage gets old very fast. Get a real small x86 PC with at least 4 GB RAM and an NVMe SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal. So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation, which was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to crash consistently when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd:
xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M
That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using gnome-disk-utility ("Disks" in the Gnome menu) and using "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small; will it not leave a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
After the first boot is completed, the first boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had the Home Assistant Mobile app installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience. The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before the upgrade, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also to downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must-have. What is a must-have, and what you can (really) only get with Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, make several users each with their own access to their own database. A couple more clicks and the DB is running and will be restarted in case of failures. But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs with what settings and configs and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later when you have already forgotten everything. Among the Addons that show up immediately after installation are addons to install the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support and ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and also provides a web based terminal that gives access to the OS itself and also to a command line interface, for example, to do package downgrades if needed or see detailed logs. And that is also where you can get the features that are the focus this year for HA developers - voice enablers. However, that is only the beginning.
Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and a repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - that is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can and also do anything Python can do. This I will need to explore later. And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - a Firefly III addon and a Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, a photo/video server, book servers, and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon. This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together. So I installed MariaDB and created a user and database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, and configured the Nordlinger bank connection settings. Then I tried to import the export that I had downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve the data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned in the export page that sure looked a lot like a backup :D After doing all that work all over again I needed to make something new, not to feel like I wasted days of work for no real gain. So I started solving a problem I have had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: an app and a bot. There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying one out, that is not what I will be using most of the time. First of all, this requires your Firefly III instance to be accessible from the Internet. Either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering the description and amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing a category and budget, and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying. An easier alternative is setting up a Telegram bot - it runs in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account for the amount of 5 with the description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty. So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!
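Since the Firefly III documentation apparently recommends a database dump as the real backup, here is a minimal sketch of what that could look like against the MariaDB addon; the host, user and database names are placeholders for whatever was configured in the addon UI:
# dump the Firefly III database and keep a dated, compressed copy
mysqldump -h core-mariadb -u firefly -p firefly | gzip > firefly-backup-$(date +%F).sql.gz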

Ian Jackson: DigiSpark (ATTiny85) - Arduino, C, Rust, build systems

Recently I completed a small project, including an embedded microcontroller. For me, using the popular Arduino IDE, and C, was a mistake. The experience with Rust was better, but still very exciting, and not in a good way. Here follows the rant. Introduction In a recent project (I ll write about the purpose, and the hardware in another post) I chose to use a DigiSpark board. This is a small board with a USB-A tongue (but not a proper plug), and an ATTiny85 microcontroller, This chip has 8 pins and is quite small really, but it was plenty for my application. By choosing something popular, I hoped for convenient hardware, and an uncomplicated experience. Convenient hardware, I got. Arduino IDE The usual way to program these boards is via an IDE. I thought I d go with the flow and try that. I knew these were closely related to actual Arduinos and saw that the IDE package arduino was in Debian. But it turns out that the Debian package s version doesn t support the DigiSpark. (AFAICT from the list it offered me, I m not sure it supports any ATTiny85 board.) Also, disturbingly, its board manager seemed to be offering to install board support, suggesting it would download stuff from the internet and run it. That wouldn t be acceptable for my main laptop. I didn t expect to be doing much programming or debugging, and the project didn t have significant security requirements: the chip, in my circuit, has only a very narrow ability do anything to the real world, and no network connection of any kind. So I thought it would be tolerable to do the project on my low-security video laptop . That s the machine where I m prepared to say yes to installing random software off the internet. So I went to the upstream Arduino site and downloaded a tarball containing the Arduino IDE. After unpacking that in /opt it ran and produced a pointy-clicky IDE, as expected. I had already found a 3rd-party tutorial saying I needed to add a magic URL (from the DigiSpark s vendor) in the preferences. That indeed allowed it to download a whole pile of stuff. Compilers, bootloader clients, god knows what. However, my tiny test program didn t make it to the board. Half-buried in a too-small window was an error message about the board s bootloader ( Micronucleus ) being too new. The boards I had came pre-flashed with micronucleus 2.2. Which is hardly new, But even so the official Arduino IDE (or maybe the DigiSpark s board package?) still contains an old version. So now we have all the downsides of curl bash-ware, but we re lacking the it s up to date and it just works upsides. Further digging found some random forum posts which suggested simply downloading a newer micronucleus and manually stuffing it into the right place: one overwrites a specific file, in the middle the heaps of stuff that the Arduino IDE s board support downloader squirrels away in your home directory. (In my case, the home directory of the untrusted shared user on the video laptop,) So, whatever . I did that. And it worked! Having demo d my ability to run code on the board, I set about writing my program. Writing C again The programming language offered via the Arduino IDE is C. It s been a little while since I started a new thing in C. After having spent so much of the last several years writing Rust. C s primitiveness quickly started to grate, and the program couldn t easily be as DRY as I wanted (Don t Repeat Yourself, see Wilson et al, 2012, 4, p.6). But, I carried on; after all, this was going to be quite a small job. 
Soon enough I had a program that looked right and compiled. Before testing it in circuit, I wanted to do some QA. So I wrote a simulator harness that #included my Arduino source file, and provided imitations of the few Arduino library calls my program used. As an side advantage, I could build and run the simulation on my main machine, in my normal development environment (Emacs, make, etc.). The simulator runs confirmed the correct behaviour. (Perhaps there would have been some more faithful simulation tool, but the Arduino IDE didn t seem to offer it, and I wasn t inclined to go further down that kind of path.) So I got the video laptop out, and used the Arduino IDE to flash the program. It didn t run properly. It hung almost immediately. Some very ad-hoc debugging via led-blinking (like printf debugging, only much worse) convinced me that my problem was as follows: Arduino C has 16-bit ints. My test harness was on my 64-bit Linux machine. C was autoconverting things (when building for the micrcocontroller). The way the Arduino IDE ran the compiler didn t pass the warning options necessary to spot narrowing implicit conversions. Those warnings aren t the default in C in general because C compilers hate us all for compatibility reasons. I don t know why those warnings are not the default in the Arduino IDE, but my guess is that they didn t want to bother poor novice programmers with messages from the compiler explaining how their program is quite possibly wrong. After all, users don t like error messages so we shouldn t report errors. And novice programmers are especially fazed by error messages so it s better to just let them struggle themselves with the arcane mysteries of undefined behaviour in C? The Arduino IDE does offer a dropdown for compiler warnings . The default is None. Setting it to All didn t produce anything about my integer overflow bugs. And, the output was very hard to find anyway because the log window has a constant stream of strange messages from javax.jmdns, with hex DNS packet dumps. WTF. Other things that were vexing about the Arduino IDE: it has fairly fixed notions (which don t seem to be documented) about how your files and directories ought to be laid out, and magical machinery for finding things you put nearby its sketch (as it calls them) and sticking them in its ear, causing lossage. It has a tendency to become confused if you edit files under its feet (e.g. with git checkout). It wasn t really very suited to a workflow where principal development occurs elsewhere. And, important settings such as the project s clock speed, or even the target board, or the compiler warning settings to use weren t stored in the project directory along with the actual code. I didn t look too hard, but I presume they must be in a dotfile somewhere. This is madness. Apparently there is an Arduino CLI too. But I was already quite exasperated, and I didn t like the idea of going so far off the beaten path, when the whole point of using all this was to stay with popular tooling and share fate with others. (How do these others cope? I have no idea.) As for the integer overflow bug: I didn t seriously consider trying to figure out how to control in detail the C compiler options passed by the Arduino IDE. (Perhaps this is possible, but not really documented?) 
I did consider trying to run a cross-compiler myself from the command line, with appropriate warning options, but that would have involved providing (or stubbing, again) the Arduino/DigiSpark libraries (and bugs could easily lurk at that interface). Instead, I thought, if only I had written the thing in Rust . But that wasn t possible, was it? Does Rust even support this board? Rust on the DigiSpark I did a cursory web search and found a very useful blog post by Dylan Garrett. This encouraged me to think it might be a workable strategy. I looked at the instructions there. It seemed like I could run them via the privsep arrangement I use to protect myself when developing using upstream cargo packages from crates.io. I got surprisingly far surprisingly quickly. It did, rather startlingly, cause my rustup to download a random recent Nightly Rust, but I have six of those already for other Reasons. Very quickly I got the trinket LED blink example, referenced by Dylan s blog post, to compile. Manually copying the file to the video laptop allowed me to run the previously-downloaded micronucleus executable and successfully run the blink example on my board! I thought a more principled approach to the bootloader client might allow a more convenient workflow. I found the upstream Micronucleus git releases and tags, and had a look over its source code, release dates, etc. It seemed plausible, so I compiled v2.6 from source. That was a success: now I could build and install a Rust program onto my board, from the command line, on my main machine. No more pratting about with the video laptop. I had got further, more quickly, with Rust, than with the Arduino IDE, and the outcome and workflow was superior. So, basking in my success, I copied the directory containing the example into my own project, renamed it, and adjusted the path references. That didn t work. Now it didn t build. Even after I copied about .cargo/config.toml and rust-toolchain.toml it didn t build, producing a variety of exciting messages, depending what precisely I tried. I don t have detailed logs of my flailing: the instructions say to build it by cd ing to the subdirectory, and, given that what I was trying to do was to not follow those instructions, it didn t seem sensible to try to prepare a proper repro so I could file a ticket. I wasn t optimistic about investigating it more deeply myself: I have some experience of fighting cargo, and it s not usually fun. Looking at some of the build control files, things seemed quite complicated. Additionally, not all of the crates are on crates.io. I have no idea why not. So, I would need to supply local copies of them anyway. I decided to just git subtree add the avr-hal git tree. (That seemed better than the approach taken by the avr-hal project s cargo template, since that template involve a cargo dependency on a foreign git repository. Perhaps it would be possible to turn them into path dependencies, but given that I had evidence of file-location-sensitive behaviour, which I didn t feel like I wanted to spend time investigating, using that seems like it would possibly have invited more trouble. Also, I don t like package templates very much. They re a form of clone-and-hack: you end up stuck with whatever bugs or oddities exist in the version of the template which was current when you started.) Since I couldn t get things to build outside avr-hal, I edited the example, within avr-hal, to refer to my (one) program.rs file outside avr-hal, with a #[path] instruction. 
That s not pretty, but it worked. I also had to write a nasty shell script to work around the lack of good support in my nailing-cargo privsep tool for builds where cargo must be invoked in a deep subdirectory, and/or Cargo.lock isn t where it expects, and/or the target directory containing build products is in a weird place. It also has to filter the output from cargo to adjust the pathnames in the error messages. Otherwise, running both cd A; cargo build and cd B; cargo build from a Makefile produces confusing sets of error messages, some of which contain filenames relative to A and some relative to B, making it impossible for my Emacs to reliably find the right file. RIIR (Rewrite It In Rust) Having got my build tooling sorted out I could go back to my actual program. I translated the main program, and the simulator, from C to Rust, more or less line-by-line. I made the Rust version of the simulator produce the same output format as the C one. That let me check that the two programs had the same (simulated) behaviour. Which they did (after fixing a few glitches in the simulator log formatting). Emboldened, I flashed the Rust version of my program to the DigiSpark. It worked right away! RIIR had caused the bug to vanish. Of course, to rewrite the program in Rust, and get it to compile, it was necessary to be careful about the types of all the various integers, so that s not so surprising. Indeed, it was the point. I was then able to refactor the program to be a bit more natural and DRY, and improve some internal interfaces. Rust s greater power, compared to C, made those cleanups easier, so making them worthwhile. However, when doing real-world testing I found a weird problem: my timings were off. Measured, the real program was too fast by a factor of slightly more than 2. A bit of searching (and searching my memory) revealed the cause: I was using a board template for an Adafruit Trinket. The Trinket has a clock speed of 8MHz. But the DigiSpark runs at 16.5MHz. (This is discussed in a ticket against one of the C/C++ libraries supporting the ATTiny85 chip.) The Arduino IDE had offered me a choice of clock speeds. I have no idea how that dropdown menu took effect; I suspect it was adding prelude code to adjust the clock prescaler. But my attempts to mess with the CPU clock prescaler register by hand at the start of my Rust program didn t bear fruit. So instead, I adopted a bodge: since my code has (for code structure reasons, amongst others) only one place where it dealt with the underlying hardware s notion of time, I simply changed my delay function to adjust the passed-in delay values, compensating for the wrong clock speed. There was probably a more principled way. For example I could have (re)based my work on either of the two unmerged open MRs which added proper support for the DigiSpark board, rather than abusing the Adafruit Trinket definition. But, having a nearly-working setup, and an explanation for the behaviour, I preferred the narrower fix to reopening any cans of worms. An offer of help As will be obvious from this posting, I m not an expert in dev tools for embedded systems. Far from it. This area seems like quite a deep swamp, and I m probably not the person to help drain it. (Frankly, much of the improvement work ought to be done, and paid for, by hardware vendors.) But, as a full Member of the Debian Project, I have considerable gatekeeping authority there. I also have much experience of software packaging, build systems, and release management. 
If anyone wants to try to improve the situation with embedded tooling in Debian, and is willing to do the actual packaging work, I would be happy to advise, and to review and sponsor your contributions. An obvious candidate: it seems to me that micronucleus could easily be in Debian. Possibly a DigiSpark board definition could be provided to go with the arduino package. Unfortunately, IMO Debian's Rust packaging tooling and workflows are very poor, and the first of my suggestions for improvement wasn't well received. So if you need help with improving Rust packages in Debian, please talk to the Debian Rust Team yourself. Conclusions Embedded programming is still rather a mess and probably always will be. Embedded build systems can be bizarre. Documentation is scant. You're often expected to download board support packages full of mystery binaries, from the board vendor (or others). Dev tooling is maddening, especially if aimed at novice programmers. You want version control? Hermetic tracking of your project's build and install configuration? Actually to be told by the compiler when you write obvious bugs? You're way off the beaten track. As ever, Free Software is under-resourced and the maintainers are often busy, or (reasonably) have other things to do with their lives. All is not lost Rust can be a significantly better bet than C for embedded software: The Rust compiler will catch a good proportion of programming errors, and an experienced Rust programmer can arrange (by suitable internal architecture) to catch nearly all of them. When writing for a chip in the middle of some circuit, where debugging involves staring at an LED or a multimeter, that's precisely what you want. Rust embedded dev tooling was, in this case, considerably better. Still quite chaotic and strange, and less mature, perhaps. But: significantly fewer mystery downloads, and significantly fewer crazy deviations from the language's normal build system. Overall, less bad software supply chain integrity. The ATTiny85 chip, and the DigiSpark board, served my hardware needs very well. (More about the hardware aspects of this project in a future posting.)
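For anyone who wants to reproduce the command-line flashing step described above, here is a rough sketch of building the Micronucleus command-line client from source and using it to upload a firmware image; the repository URL and the firmware file name are assumptions for illustration, not taken from the post:
# build the micronucleus uploader from the upstream sources
git clone https://github.com/micronucleus/micronucleus.git
cd micronucleus/commandline
make
# plug the DigiSpark in when prompted; firmware.hex is a placeholder name
./micronucleus --run firmware.hex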


14 October 2023

Ravi Dwivedi: Kochi - Wayanad Trip in August-September 2023

A trip full of hitchhiking, beautiful places and welcoming locals.

Day 1: Arrival in Kochi Kochi is a city in the state of Kerala, India. This year's DebConf was to be held in Kochi from 3rd September to 17th of September, which I was planning to attend. My friend Suresh, who was planning to join, told me that 29th August 2023 would be Onam, a major festival of the state of Kerala. So, we planned a Kerala trip before the DebConf. We booked early morning flights for Kochi from Delhi and reached Kochi on 28th August. We had booked a hostel named Zostel in Ernakulam. During check-in, they asked me to fill a form which required signing in using a Google account. I told them I don't have a Google account and I don't want to create one either. The people at the front desk seemed receptive, so I went ahead with telling them the problems of such a sign-in being mandatory for check-in. Anyway, they only took a photo of my passport and let me check in without a Google account. We stayed in a ten room dormitory, which allowed travellers of any gender. The dormitory room was air-conditioned, spacious, clean and the beds were also comfortable. There were two bathrooms in the dormitory and they were clean. Plus, there was a separate dormitory room in the hostel exclusively for females. I noticed that Zostel was not added to OpenStreetMap, so I added it :) . The hostel had a small canteen for tea and snacks, and a common sitting area outside the dormitories, which had beds too. There was a separate silent room, suitable for people who want to work.
Dormitory room in Zostel Ernakulam, Kochi.
Beds in Zostel Ernakulam, Kochi.
We had lunch at a nearby restaurant and it was hard to find anything vegetarian for me. I bought some freshly made banana chips from the street and they were tasty. As far as I remember, I had a big glass of pineapple juice for lunch. Then I went to the Broadway market and bought some cardamom and cinnamon for home. I also went to a nearby supermarket and bought Matta brown rice for home. Then, I looked for a courier shop to send the things home but all of them were closed due to Onam festival. After returning to the Zostel, I overslept till 9 PM and in the meanwhile, Suresh planned with Saidut and Shwetank (who met us during our stay in Zostel) to go to a place in Fort Kochi for dinner. I suspected I will be disappointed by lack of vegetarian options as they were planning to have fish. I already had a restaurant in mind - Brindhavan restaurant (suggested by Anupa), which was a pure vegetarian restaurant. To reach there, I got off at Palarivattom metro station and started looking for an auto-rickshaw to get to the restaurant. I didn t get any for more than 5 minutes. Since that restaurant was not added to the OpenStreetMap, I didn t even know how far that was and which direction to go to. Then, I saw a Zomato delivery person on a motorcycle and asked him where the restaurant was. It was already 10 PM and the restaurant closes at 10:30. So, I asked him whether he can drop me off. He agreed and dropped me off at that restaurant. It was 4-5 km from that metro station. I tipped him and expressed my gratefulness for the help. He refused to take the tip, but I insisted and he accepted. I entered the restaurant and it was coming to a close, so many items were not available. I ordered some Kadhai Paneer (only item left) with naan. It tasted fine. Since the next day was Thiruvonam, I asked the restaurant about the Sadya thali menu and prices for the next day. I planned to eat Sadya thali at that restaurant, but my plans got changed later.
Onam sadya menu from Brindhavan restaurant.

Day 2: Onam celebrations The next day, on 29th of August 2023, we had a plan to leave for Wayanad. Wayanad is a hill station in Kerala and a famous tourist spot. Praveen suggested visiting Munnar as it is far closer to Kochi than Wayanad (80 km vs 250 km). But I had already visited Munnar on my previous trips, so we chose Wayanad. We had a train late at night from Ernakulam Junction (at 23:30 hours) to Kozhikode, which is the nearest railway station to Wayanad. So, we checked out in the morning as we had plans to roam around in Kochi before taking the train. Zostel was celebrating Onam on that day. To opt in, we had to pay 400 rupees, which included a Sadya Thali and a mundu. Suresh and I paid the amount and opted in for the celebrations. The Sadya thali had Rice, Sambhar, Rasam, Avial, Banana Chips, Pineapple Pachadi, Pappadam, many types of pickles and chutneys, Pal Ada Payasam and Coconut Jaggery Payasam. And, there was water too :). Those payasams were really great and I had one more round of them. Later, I had a lot of variety of payasams during the DebConf.
Sadya lined up for serving
Sadya thali served on banana leaf.
So, we hung out in the common room and put our luggage there. We played UNO and had conversations with other travellers in the hostel. I had a fun time there and I still think it is one of the best hostel experiences I have had. We made good friends with Saiduth (Telangana) and Shwetank (Uttarakhand). They were already aware of software like Debian, and we had some detailed conversations about the Free Software movement. I remember explaining the difference between the terms Open Source and Free Software. I also told them about the StreetComplete app, a beginner friendly app to edit OpenStreetMap. We had dinner at a place nearby (named Palaraam), but again, the vegetarian options were very limited! After dinner, we came back to the Zostel, and Suresh and I left for Ernakulam Junction to catch our train, the Maveli Express (16604).

Day 3: Going to Wayanad Maveli Express was scheduled to reach Kozhikode at 03:25 (morning). I had set alarms from 03:00 to 03:30, with the gap of 10 minutes. Every time I woke up, I turned off the alarm. Then I woke up and saw train reaching the Kozhikode station and woke up Suresh for deboarding. But then I noticed that the train is actually leaving the station, not arriving! This means we missed our stop. Now we looked at the next stops and whether we can deboard there. I was very sleepy and wanted to take a retiring room at some station before continuing our journey to Wayanad. The next stop was Quilandi and we checked online that it didn t have a retiring room. So, we skipped this stop. We got off at the next stop named Vadakara and found out no retiring room was available. So, we asked about information regarding bus for Wayanad and they said that there is a bus to Wayanad around 07:00 hours from bus station which was a few kilometres from the railway station. We took a bus for Kalpetta (in Wayanad) at around 07:00. The destination of the buses were written in Malayalam, which we could not read. Once again, the locals helped us to get on to the bus to Kalpetta. Vadakara is not a big city and it can be hard to find people who know good Hindi or English, unlike Kochi. Despite language issues, I had no problem there in navigation, thanks to locals. I mostly spent time sleeping during the bus journey. A few hours later, the bus dropped us at Kalpetta. We had a booking at a hostel in Rippon village. It was 16 km from Kalpetta. On the way, we were treated with beautiful views of nature, which was present everywhere in Wayanad. The place was covered with tea gardens and our eyes were treated with beautiful scenery at every corner.
We were treated with such views during the Wayanad trip.
Rippon village was a very quiet place and I liked the calm atmosphere. This place is blessed by nature and has stunning scenery. I found English was more common than Hindi in Wayanad. Locals were very nice and helped me, even if they didn't know my language.
A road in Rippon.
After catching some sleep at the hostel, I went out in the afternoon. I hitchhiked to reach the main road from the hostel. I bought more spices from a nearby shop and realized that I should have waited for my visit to Wayanad to buy cardamom, which I had already bought in Kochi. Then, I was looking for a post office to send the spices home. The people at the spice shop told me that the nearby Rippon post office was closed by that time, but the post office at Meppadi, 5 km away, was open. I went to Meppadi and saw that the post office closed at 15:00, and I reached five minutes late. My packing was not very good and they asked me to pack it tighter. There was a shop near the post office, and the people there gave me cardboard and tape and helped me pack my stuff for posting. By the time I went back to the post office, it was 15:30, but they still accepted my parcel.

Day 4: Kanthanpara Falls, Zostel Wayanad and Karapuzha Dam Kanthanpara waterfalls were 2 km from the hostel. I hitchhiked there from the hostel on a scooty. The entry ticket cost Rs 40. There were good views inside, but nothing much to see except the waterfalls.
Entry to Kanthanpara Falls.
Kanthanpara Falls.
We had a booking at Zostel Wayanad for this day, so we shifted there. Again, as at their Ernakulam branch, they asked me to fill out a form which required signing in with Google, but when I said I didn't have a Google account they checked me in without it. There were tea gardens inside the Zostel boundaries and the property was beautiful.
A view of Zostel Wayanad.
A map of Wayanad showing tourist places.
A view from inside the Zostel Wayanad property.
Later in the evening, I went to Karapuzha Dam. I witnessed a beautiful sunset during the journey. Karapuzha Dam had many activities, like ziplining, and was nice to roam around. Chembra Peak is near Zostel Wayanad, so I was planning to trek to the heart-shaped lake there. The trek was suggested by Praveen, and looking online it seemed worth doing. There was an issue, however: the charge for the trek was Rs 1770 for up to five people. So, if I went alone, I would have to pay the full Rs 1770 myself; with another person, we would split the Rs 1770 into two, and so on. The optimal way to do it is to go in a group of five (you included :D). I asked the front desk at Zostel if they could connect me with people going to Chembra Peak the next day, and they told me about a group of four people planning to go there the next day. I got lucky! All four of them were from Kerala and worked in Qatar.

Day 5: Chembra peak trek The date was 1st September 2023. I woke up early (05:30 in the morning) for the Chembra Peak trek. I had bought hiking shoes especially for trekking, which turned out to be a very good idea. The ticket counter opens at 07:00. The group of four I planned to trek with met me at the Zostel around 06:00. We went to the ticket counter around 06:30 and had breakfast at shops selling Maggi noodles and bread omelette near the counter. It was a hot day and the trek was difficult for an inexperienced person like me. The scenery was green and beautiful throughout.
Terrain during trekking towards the Chembra peak.
Heart-shaped lake at the Chembra peak.
Me at the heart-shaped lake.
Views from the top of the Chembra peak.
View of another peak from the heart-shaped lake.
While returning from the trek, I found a shop selling bamboo rice. I bought some and will make bamboo rice payasam out of it at home (I have some coconut milk from Kerala too ;)). We returned to the Zostel in the afternoon. I had muscle pain after the trek and it still has not completely disappeared. At night, we took a bus from Kalpetta to Kozhikode in order to return to Kochi.

Day 6: Return to Kochi At midnight on the 2nd of September, we reached Kozhikode bus stand. We then roamed around for something to eat, but I didn't find anything vegetarian. No surprises there! Then we went to Kozhikode railway station and looked for retiring rooms, but no luck there. We waited at the station, took the next train to Kochi at 03:30, and reached Ernakulam Junction at 07:30 (half an hour before the train's scheduled time!). From there, we went to Zostel Fort Kochi, stayed one night, and checked out the next morning.

Day 7: Roaming around in Fort Kochi On the 3rd of September, we roamed around Fort Kochi. We visited the usual places - St Francis Church, the Dutch Palace, Jew Town and the Paradesi Synagogue. I also visited some homestays, and the owners were very happy to show me around even when I made it clear that I was not looking for a stay. In the evening, we went to Kakkanad to attend DebConf. The story continues in my DebConf23 blog post.

12 October 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, September 2023 (by Santiago Ruano Rincón)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In September, 21 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 10.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 4.0h to the next month.
  • Adrian Bunk did 7.0h (out of 17.0h assigned), thus carrying over 10.0h to the next month.
  • Anton Gladky did 9.5h (out of 7.5h assigned and 7.5h from previous period), thus carrying over 5.5h to the next month.
  • Bastien Roucariès did 16.0h (out of 15.5h assigned and 1.5h from previous period), thus carrying over 1.0h to the next month.
  • Ben Hutchings did 17.0h (out of 17.0h assigned).
  • Chris Lamb did 17.0h (out of 17.0h assigned).
  • Emilio Pozuelo Monfort did 30.0h (out of 30.0h assigned).
  • Guilhem Moulin did 18.25h (out of 18.25h assigned).
  • Helmut Grohne did 10.0h (out of 10.0h assigned).
  • Lee Garrett did 17.0h (out of 16.5h assigned and 0.5h from previous period).
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 4.5h (out of 0h assigned and 24.0h from previous period), thus carrying over 19.5h to the next month.
  • Roberto C. Sánchez did 5.0h (out of 12.0h assigned), thus carrying over 7.0h to the next month.
  • Santiago Ruano Rincón did 7.75h (out of 16.0h assigned), thus carrying over 8.25h to the next month.
  • Sean Whitton did 7.0h (out of 7.0h assigned).
  • Sylvain Beucler did 10.5h (out of 17.0h assigned), thus carrying over 6.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 13.25h (out of 16.0h assigned), thus carrying over 2.75h to the next month.

Evolution of the situation In September, we released 44 DLAs. It was a busy month for the LTS Team. A notable security issue fixed in September was the high-severity CVE-2023-4863, a heap buffer overflow that allowed remote attackers to perform an out-of-bounds memory write via a crafted WebP file. This CVE was covered by three DLAs for different packages: firefox-esr, libwebp and thunderbird. The backported libwebp patch was sent upstream, who adapted and applied it to the 0.6.1 branch. It is also worth noting that LTS contributor Markus Koschany included in his work updates to packages in Debian Bullseye and Bookworm that are under the umbrella of the Security Team: xrdp, jetty9 and mosquitto. As every month, there was important behind-the-scenes work by the Front Desk staff, who triaged, analyzed and reviewed dozens of vulnerabilities to decide whether they warrant a security update. This is very important work, since we need to trade off between the frequency of updates and the stability of the LTS release.
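As an aside (not part of the report), a user tracking the LTS suite who wants to confirm they have picked up these fixes could do something along the lines of the sketch below; the binary package name libwebp6 is an assumption about how the library is packaged on that release, so check the DLA announcements for the exact source packages and versions.
# Sketch only: pull in and verify the security updates mentioned above.
# "libwebp6" is an assumed binary package name; the DLAs list the real ones.
$ sudo apt-get update
$ sudo apt-get install --only-upgrade libwebp6 firefox-esr thunderbird
$ dpkg -l libwebp6 firefox-esr thunderbird   # compare versions against the DLAs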

Thanks to our sponsors Sponsors that joined recently are in bold.

8 October 2023

Niels Thykier: A new Debian package helper: debputy

I have made a new helper for producing Debian packages called debputy. Today, I uploaded it to Debian unstable for the first time. This enables others to migrate their package build to use dh +debputy rather than the classic dh. Eventually, I hope to remove dh entirely from this equation, so you only need debputy. But for now, debputy still leverages dh support for managing upstream build systems. The debputy tool takes a radically different approach to packaging compared to our existing packaging methods, by using a single high-level manifest instead of all the debian/install (etc.) files and no hook targets in debian/rules. Here are some of the things that debputy can do or does: There are also some features that debputy does not support at the moment: These are all limitations of the current work in progress, and I hope to resolve them all in due time.

Trying debputy With the limitations aside, let's talk about how you would go about migrating a package:
# Assuming here you have already run: apt install dh-debputy
$ git clone https://salsa.debian.org/rra/kstart
[...]
$ cd kstart
# Add a Build-Dependency on dh-sequence-debputy
$ perl -n -i -e \
   'print; print " dh-sequence-debputy,\n" if m/debhelper-compat/;' \
    debian/control
$ debputy migrate-from-dh --apply-changes
debputy: info: Loading plugin debputy (version: archive/debian/4.3-1) ...
debputy: info: Verifying the generating manifest
debputy: info: Updated manifest debian/debputy.manifest
debputy: info: Removals:
debputy: info:   rm -f "./debian/docs"
debputy: info:   rm -f "./debian/examples"
debputy: info: Migrations performed successfully
debputy: info: Remember to validate the resulting binary packages after rebuilding with debputy
$ cat debian/debputy.manifest 
manifest-version: '0.1'
installations:
- install-docs:
    sources:
    - NEWS
    - README
    - TODO
- install-examples:
    source: examples/krenew-agent
$ git add debian/debputy.manifest
$ git commit --signoff -am"Migrate to debputy"
# Run build tool of choice to verify the output.
This is of course a specific example that works out of the box. If you were to try this on debianutils (from git), the output would look something like this:
$ debputy migrate-from-dh
debputy: info: Loading plugin debputy (version: 5.13-13-g9836721) ...
debputy: error: Unable to migrate automatically due to missing features in debputy.
  * The "debian/triggers" debhelper config file (used by dh_installdeb) is currently not supported by debputy.
Use --acceptable-migration-issues=[...] to convert this into a warning [...]
And indeed, debianutils requires at least 4 debhelper features beyond what debputy can support at the moment (all related to maintscripts and triggers).

Rapid feedback Rapid feedback cycles are important for keeping developers engaged in their work. The debputy tool provides the following features to enable rapid feedback.

Immediate manifest validation It would be absolutely horrible if you had to do a full rebuild only to realize you got the manifest syntax wrong. Therefore, debputy has a check-manifest command that checks the manifest for syntactical and semantic issues.
$ cat debian/debputy.manifest
manifest-version: '0.1'
installations:
- install-docs:
    sources:
    - GETTING-STARTED-WITH-dh-debputy.md
    - MANIFEST-FORMAT.md
    - MIGRATING-A-DH-PLUGIN.md
$ debputy check-manifest
debputy: info: Loading plugin debputy (version: 0.1.7-1-gf34bd66) ...
debputy: info: No errors detected.
$ cat <<EOF >> debian/debputy.manifest
- install:
    sourced: foo
    as: usr/bin/foo
EOF
# Did I typo anything?
$ debputy check-manifest
debputy: info: Loading plugin debputy (version: 0.1.7-2-g4ef8c2f) ...
debputy: warning: Possible typo: The key "sourced" at "installations[1].install" should probably have been 'source'
debputy: error: Unknown keys " 'sourced' " at installations[1].install".  Keys that could be used here are: sources, when, dest-dir, source, into.
debputy: info: Loading plugin debputy (version: 0.1.7-2-g4ef8c2f) ...
$ sed -i s/sourced:/source:/ debian/debputy.manifest
$ debputy check-manifest
debputy: info: Loading plugin debputy (version: 0.1.7-2-g4ef8c2f) ...
debputy: info: No errors detected.
The debputy check-manifest command is limited to the manifest itself and does not warn about foo not existing, as it could be produced as part of the upstream build system. Therefore, there are still issues that can only be detected at package build time. But where debputy can reliably give you immediate feedback, it will do so.

Idempotence: Clean re-runs of dh_debputy without clean/rebuild If you read the fine print of many debhelper commands, you may see the following note in their manpage:
"This command is not idempotent. dh_prep(1) should be called between invocations of this command." (Manpage of an anonymous debhelper tool)
What this usually means is that if you run the command twice, you will get its maintscript change (etc.) twice in the final deb. This fits into our single-use clean throw-away chroot builds on the buildds and CI, as well as dpkg-buildpackage's no-clean (-nc) option. Single-use throw-away chroots are not very helpful for debugging though, so I rarely use them when doing the majority of my packaging work, as I do not want to wait for the chroot initialization (including installation of build-depends). But even then, I have found that dpkg-buildpackage -nc has been useless for me in many cases, as I am stuck between two options:
  • With -nc, you often still interact with the upstream build system. As an example, debhelper will do a dh_prep followed by dh_auto_install, so now we are waiting for upstream's install target to run again. What should have taken seconds now easily takes an extra 0.5-1 minute per attempt.
  • If you want to bypass this, you have to manually call the helpers needed (in the correct order), and every run accumulates cruft from previous runs to the point that the cruft drowns out the actual change you want to see. Also, I am rarely in the mood to play human dh when I am debugging an issue that I failed to fix in my first, second and third tries.
As you can probably tell, neither option has worked that well for me. But with dh_debputy, I have made it a goal that it will not self-taint the final output. If dh_debputy fails, you should be able to tweak the manifest and re-run dh_debputy with the same arguments.
  • No waiting for dpkg-buildpackage -nc nor anything implied by that.
  • No self-tainting of the final deb. The result you get is the result you would have gotten if the previous dh_debputy run never happened.
  • Because dh_debputy produces the final result, I do not have to run multiple tools in the right order.
Obviously, this is currently a lot easier, because debputy is not involved in the upstream build system at all. If this feature is useful to you, please do let me know and I will try to preserve it as debputy progresses in features.
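To make that concrete, here is a rough sketch of the edit/re-run loop this enables; the exact dh_debputy invocation is illustrative, since the arguments it receives normally come from dh itself:
# Illustrative sketch of the edit/re-run loop described above.
$ dpkg-buildpackage -us -uc          # initial full build
# ... inspect the resulting .deb and spot a packaging mistake ...
$ $EDITOR debian/debputy.manifest    # tweak the manifest
$ dh_debputy                         # re-run just the packaging step: no dh_prep,
                                     # no upstream install target, no accumulated cruft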

Packager provided files On a different topic, have you ever wondered what kind of files you can place into the debian directory that debhelper automatically picks up or reacts to? I do not have an answer to that beyond "it is over 80 files", and, as the maintainer of debhelper, I am not willing to maintain such a list by hand. However, I do know what the answer is in debputy, because I can just ask debputy:
$ debputy plugin list packager-provided-files
+-----------------------------+---------------------------------------------[...]
  Stem                          Installed As                                [...]
+-----------------------------+---------------------------------------------[...]
  NEWS                          /usr/share/doc/<name>/NEWS.Debian           [...]
  README.Debian                 /usr/share/doc/<name>/README.Debian         [...]
  TODO                          /usr/share/doc/<name>/TODO.Debian           [...]
  bug-control                   /usr/share/bug/<name>/control               [...]
  bug-presubj                   /usr/share/bug/<name>/presubj               [...]
  bug-script                    /usr/share/bug/<name>/script                [...]
  changelog                     /usr/share/doc/<name>/changelog.Debian      [...]
  copyright                     /usr/share/doc/<name>/copyright             [...]
[...]
This will list all file types (the Stem column) that debputy knows about, and it accounts for any plugin that debputy can find. Note that, to be deterministic, debputy will not auto-load plugins that have not been explicitly requested during package builds, so this list could include files that are available but not active for your current package. Also note that the output is not intended to be machine readable; that may come in a later version. Feel free to chime in if you have a concrete use-case.

Take it for a spin As I said at the start of this blog post, debputy is now available in unstable. I hope you will take it for a spin on some of your simpler packages and provide feedback on it. For documentation, please have a look at: Thanks for considering. PS: My deepest respect to the fakeroot maintainers. That game of whack-a-mole is not something I would have been willing to maintain. I think fakeroot is like the Python GIL in the sense that it has been important in getting Debian to where it is today. But at the same time, I feel it is time to let go of the crutch and find a proper solution.

4 October 2023

Russ Allbery: Review: The Last Watch

Review: The Last Watch, by J.S. Dewes
Series: Divide #1
Publisher: Tor
Copyright: 2021
ISBN: 1-250-23634-7
Format: Kindle
Pages: 476
The Last Watch is the first book of a far-future science fiction duology. It was J.S. Dewes's first novel. The station of the SCS Argus is the literal edge of the universe: the Divide, beyond which there is nothing. Not simply an absence of stars, but a nothing from a deeper level of physics. The Argus is there to guard against a return of the Viators, the technologically superior alien race that nearly conquered humanity hundreds of years prior and has already returned once, apparently traveling along the Divide. Humanity believes the Viators have been wiped out, but they're not taking chances. It is not a sought-after assignment. The Sentinels are the dregs of the military: convicts, troublemakers, and misfits, banished to the literal edge of nowhere. Joining them at the start of this book is the merchant prince, cocky asshole, and exiled saboteur Cavalon Mercer. He doesn't know what to expect from either military service or service on the edge of the universe. He certainly did not expect the Argus to be commanded by Adequin Rake, a literal war hero and a far more effective leader than this post would seem to warrant. There are reasons why Rake is out on the edge of the universe, ones that she's not eager to talk about. They quickly become an afterthought when the Argus discovers that the Divide is approaching their position. The universe is collapsing, and the only people who know about it are people the System Collective would prefer to forget exist. Yes, the edge of the universe, not the edge of the galaxy. Yes, despite having two FTL mechanisms, this book has a scale problem that it never reconciles. And yes, the physics do not really make sense, although this is not the sort of book that tries to explain the science. The characters are too busy trying to survive to develop new foundational theories of physics. I was looking for more good military SF after enjoying Artifact Space so much (and still eagerly awaiting the sequel), so I picked this up. It has some of the same elements: the military as a place where you can make a fresh start with found family elements, the equalizing effects of military assignments, and the merits of good leadership. They're a bit disguised here, since this is a crew of often-hostile misfits under a lot of stress with a partly checked-out captain, but they do surface towards the end of the book. The strength of this book is the mystery of the contracting universe, which poses both an immediate threat to the ship and a longer-term potential threat to, well, everything. The first part of the book builds tension with the immediate threat, but the story comes into its own when the crew starts piecing together the connections between the Viators and the Divide while jury-rigging technology and making risky choices between a lot of bad options. This is the first half of a duology, so the mysteries are not resolved here, but they do reach a satisfying and tantalizing intermediate conclusion. The writing is serviceable and adequate, but it's a bit clunky in places. Dewes doesn't quite have the balance right between setting the emotional stakes and not letting the characters indulge in rumination. Rake is a good captain who is worn down and partly checked out, Mercer is scared and hiding it with arrogance and will do well when given the right sort of attention, and all of this is reasonably obvious early on and didn't need as many of the book's pages as it gets. 
I could have done without the romantic subplot, which I thought was an unnecessary distraction from the plot and turned into a lot of tedious angst, but I suspect I was not the target audience. (Writers, please remember that people can still care about each other and be highly motivated by fear for each other without being romantic partners.) I would not call this a great book. The characters are not going to surprise you that much, and it's a bit long for the amount of plot that it delivers. If you are the sort of person who nit-picks the physics of SF novels and gets annoyed at writers who don't understand how big the universe is, you will have to take a deep breath and hold on to your suspension of disbelief. But Dewes does a good job with ratcheting up the tension and conveying an atmosphere of mysterious things happening at the edge of nowhere, while still keeping it in the genre of mysterious technology and mind-bogglingly huge physical phenomena rather than space horror. If you've been looking for that sort of book, this will do. I was hooked and will definitely read the sequel. Followed by The Exiled Fleet. Rating: 7 out of 10

29 September 2023

Scarlett Gately Moore: KDE: Another Busy Week! KDE neon, Debian, Snaps Oh My!

KDE Plasma 6
I would like to welcome you to my revamped site. It is still a work in progress, so please be patient while I work out the kinks! I have also explained a bit more about myself on my About Me page for those that may have questions about my homesteader lifestyle. Check it out when you have time. My site is mostly about my adventures in packaging software for Linux in a variety of formats (mostly Debian and Ubuntu Snap containerized packages). This keeps me very busy, as folks don't realize the importance of packaging. Without it, applications remain in source code form, which isn't very usable for users! While turning the source code into something user friendly, we often run into issues and work with upstream (I am a very strong believer in upstream first) to resolve them. This makes for a better user experience and less buggy software. Workarounds are very hard to maintain, and thus fixing it right the first time is the best path! With this said, while I am not strong in any one programming language (well, maybe Ruby from my CI tooling background), I am versed in many languages, as I have to understand the code that I am filing bug reports for! We have to be able to understand build failures, debug runtime failures and, most importantly, fix them, or find the resources to assist in fixing them. As most of you know, I am KDE's biggest fan (there is nothing wrong with Gnome, it's a great platform). So a big portion of my work is dedicated to KDE. A fantastic tool for working on my KDE packaging has been KDE Neon! With the developer version I have all the tools necessary to debug and fix issues that arise. There is also the added bonus of living on the edge and finding out about runtime issues right away! That is enough about me for now and on to my weekly round up!

KDE neon: Carlos (check out his new blog! https://www.ethicalconstruct.au/dotclear_blog/) and I have been very busy with another round of KDE applications making the move to Qt6. We have finished KDE PIM and KDE Games in Neon/unstable! I have worked out issues with print-manager and re-enabled it in experimental, as its Qt6 development is still happening in the kf6 branch. Instructions here: https://blog.neon.kde.org/2023/08/04/announcing-kde-neon-experimental/ I also fixed issues with a broken kscreenlocker and missing window decorations. You can now safely leave your computer and not worry about that dreaded black screen.

Debian: I have uploaded the newest squashfuse to unstable. I have also uploaded another NEW dependency for bubblegum: golang-github-alecthomas-mango-kong-dev.

Ubuntu Snaps: This week I continued working closely with Jarred Wilson of Canonical on getting his Qt6 content snap in shape for use with my KDE Frameworks 6 snap (an essential snap to move forward with our next generation Qt6 applications and, of course, the Plasma snap). I spent some time debugging the neochat snap and fixed some QML issues, but I am now facing issues with Wayland. It works fine for those of us still on X11, and I will continue working out the Wayland issues.

Thank you! I rely on donations to keep up with everyday living, and so far, thanks to each and every one of you, I have survived almost a full year! It has been scary from time to time, but I am surviving. Until my snap project goes through, I must rely on the kindness of my supporters. The proceeds of my donations go to the following: I have joined the kool kids and moved to Donorbox for donations.
Donate I still have GoFundMe for those that don't want to sign up for yet another donation platform. https://gofund.me/b8b69e54

21 September 2023

Jonathan McDowell: DebConf23 Writeup

DebConf2023 Logo (I wrote this up for an internal work post, but I figure it's worth sharing more publicly too.) I spent last week at DebConf23, this year's instance of the annual Debian conference, which was held in Kochi, India. As usual, DebConf provides a good reason to see a new part of the world; I've been going since 2004 (Porto Alegre, Brazil), and while I've missed a few (Mexico, Bosnia, and Switzerland) I've still managed to make it to instances on 5 continents. This has absolutely nothing to do with work, so I went on my own time + dime, but I figured a brief write-up might prove of interest.

I first installed Debian back in 1999 on a machine that was being co-located to operate as a web server / email host. I was attracted by the promise of easy online upgrades (or, at least, upgrades that could be performed without the need to be physically present at the machine, even if they naturally required a reboot at some point). It has mostly delivered on this over the years, and I've never found a compelling reason to move away. I became a Debian Developer in 2000. As a massively distributed volunteer project, DebConf provides an opportunity to find out what's happening in other areas of the project, catch up with team mates, and generally feel more involved and energised to work on Debian stuff. Also, by this point in time, a lot of Debian folk are good friends and it's always nice to catch up with them.

On that point, I felt that this year the hallway track was not quite the same as usual. For a number of reasons (COVID, climate change, travel time, we're all getting older) I think fewer core teams are achieving critical mass at DebConf - I was the only member physically present from 2 teams I'm involved in, and I'd have appreciated the opportunity to sit down with both of them for some in-person discussions. It also means it's harder to use DebConf as a venue for advancing major changes; previously, having all the decision makers in the same space for a week has meant it's possible to iron out the major discussion points, smoothing remote implementation after the conference. I'm told the mini DebConfs are where it's at for these sorts of meetings now, so perhaps I'll try to attend at least one of those next year.

Of course, I also went to a bunch of talks. I have differing levels of comment about each of them, but I've written up some brief notes below about the ones I remember something about. The comment was made that we perhaps had a lower level of deep technical talks, which is perhaps true, but I still think there were a number of high level technical talks that served to pique one's interest about the topic. Finally, this DebConf was the first I'm aware of that was accompanied by tragedy; as part of the day trip Abraham Raji, a project member and member of the local team, was involved in a fatal accident.

Talks (videos not yet up for all, but should appear for most)
  • Opening Ceremony
    Not much to say here; welcome to DebConf!
  • Continuous Key-Signing Party introduction
    I ended up running this, as Gunnar couldn't make it. Debian makes heavy use of the OpenPGP web of trust (no mass ability to send out Yubikeys + perform appropriate levels of identity verification), so making sure we're appropriately cross-signed, and linked to local conference organisers, is a dull but important part of the conference. We use a modified keysigning approach where identity verification + fingerprint confirmation happens over the course of the conference, so this session was just to explain how that works and confirm we were all working from the same fingerprint list.
  • State of Stateless - A Talk about Immutability and Reproducibility in Debian
    Stateless OSes seem to be gaining popularity, so I went along to this to see if there was anything of note. It was interesting, but nothing earth shattering - very high level.
  • What s missing so that Debian is finally reproducible?
    Reproducible builds are something I've been keeping an eye on for a long time, and I continue to be impressed by the work folks are putting into this - both for Debian, and other projects. From a security standpoint reproducible builds provide confidence against trojaned builds, and from a developer standpoint knowing you can build reproducibly helps with not having to keep a whole bunch of binary artefacts around.
  • Hello from keyring-maint
    In the distant past the process of getting your OpenPGP key into the Debian keyring (which is used to authenticate uploads + votes, amongst other things) was a clunky process that was often stalled. This hasn't been the case for at least the past 10 years, but there's still a residual piece of project memory that thinks keyring is a blocker. So as a team we say hi and talk about the fact we do monthly updates and generally are fairly responsive these days.
  • A declarative approach to Linux networking with Netplan
    Debian's /etc/network/interfaces is a fairly basic (if powerful) mechanism for configuring network interfaces. NetworkManager is a better bet for dynamic hosts (i.e. clients), and systemd-networkd seems to be a good choice for servers (I'm gradually moving machines over to it). Netplan tries to provide a unified mechanism for configuring both with a single configuration language (a rough sketch of what that looks like is included after this list). A noble aim, but I don't see a lot of benefit for anything I use - my NetworkManager hosts are highly dynamic (so no need to push shared config) and systemd-networkd (or /etc/network/interfaces) works just fine on the other hosts. I'm told Netplan has more use with more complicated setups, e.g. when OpenVSwitch is involved.
  • Quick peek at ZFS, A too good to be true file system and volume manager.
    People who use ZFS rave about it. I'm naturally suspicious of any file system that doesn't come as part of my mainline kernel. But, as a longtime cautious mdraid+lvm+ext4 user I appreciate that there have been advances in the file system space that maybe I should look at, and I've been trying out btrfs on more machines over the past couple of years. I can't deny ZFS has a bunch of interesting features, but nothing I need/want that I can't get from an mdraid+lvm+btrfs stack (in particular data checksumming + reflinks for dedupe were strong reasons to move to btrfs over ext4).
  • Bits from the DPL
    Exactly what it says on the tin; some bits from the DPL.
  • Adulting
    Enrico is always worth hearing; Adulting was no exception. The main takeaway is that we need to avoid trying to run the project on martyrs and instead make sure we build a sustainable project. I've been trying really hard to accept that I just don't have time to take on additional responsibilities, no matter how interesting or relevant they might seem, so this resonated.
  • My life in git, after subversion, after CVS.
    Putting all of your home directory in revision control. I've never made this leap; I've got some Ansible playbooks that push out my core pieces of configuration, which is held in git, but I don't actually check this out directly on hosts I have accounts on. Interesting, but not for me.
  • EU Legislation BoF - Cyber Resilience Act, Product Liability Directive and CSAM Regulation
    The CRA seems to be a piece of ill-informed legislation that I'm going to have to find time to read properly. Discussion was a bit more alarmist than I personally feel is warranted, but it was a short session, had a bunch of folk in it, and even when I removed my mask it was hard to make myself understood.
  • What s new in the Linux kernel (and what s missing in Debian)
    An update from Ben about new kernel features. I'm paying less attention to such things these days, so it was nice to get a quick overview of it all.
  • Intro to SecureDrop, a sort-of Linux distro
    Actually based on Ubuntu, but lots of overlap with Debian as a result, and highly customised anyway. Notable, to me, for using OpenPGP as some of the backend crypto support. I managed to talk to Kunal separately about some of the pain points around that, which was an interesting discussion - they're trying to move from GnuPG to Sequoia, primarily because of the much easier integration and lack of requirement for the more complicated GnuPG features that sometimes get in the way.
  • The Docker(.io) ecosystem in Debian
    I hate Docker. I'm sure it's fine if you accept it wants to take over the host machine entirely, but when I've played around with it that's not been the case. This talk was more about the difficulty of trying to keep a fast moving upstream with lots of external dependencies properly up to date in a stable release. Vendoring the deps and trying to get a stable release exception seems like the least bad solution, but it's a problem that affects a growing number of projects.
  • Chiselled containers
    This was kind of interesting, but I think I missed the piece about why more granular packaging wasn't an option. The premise is you can take an existing .deb and chisel it into smaller components, which then helps separate out dependencies rather than pulling in as much as the original .deb would. This was touted as being useful, in particular, for building targeted containers. Definitely appealing over custom built userspaces for containers, but in an ideal world I think we'd want the information in the main packaging, and it becomes a lot of work.
  • Debian Contributors shake-up
    Debian Contributors is a great site for massaging your ego around contributions to Debian; it's also a useful point of reference from a data protection viewpoint in terms of information the project holds about contributors - everything is already public, but the Contributors website provides folk with an easy way to find their own information (with various configurable options about whether that's made public or not). Tássia is working on improving the various data feeds into the site, but realistically this is the responsibility of every Debian service owner.
  • New Member BOF
    I'm part of the teams that help get new folk into Debian - primarily as a member of the New Member Front Desk, but also as a mostly inactive Application Manager. It's been a while since we did one of these sessions so the Front Desk/Debian Account Managers that were present did a panel session. Nothing earth shattering came out of it; like keyring-maint this is a team that has historically had problems, but is currently running smoothly.
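As promised in the Netplan notes above, here is a rough sketch of the sort of unified YAML it consumes; the file name and interface name (eth0) are made up for illustration:
# Illustrative only: a minimal Netplan configuration for a DHCP client.
$ cat /etc/netplan/01-example.yaml
network:
  version: 2
  renderer: networkd        # or NetworkManager
  ethernets:
    eth0:
      dhcp4: true
$ sudo netplan apply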
